The first SQL statement LEFT JOINs tbl1 to tbl2 directly. If tbl2 contains many records with the same id (say, 10,000 rows for one id), the intermediate result of the join is very large, and the subsequent grouping and filtering for the maximum value performs poorly. However, even if tbl1 contains multiple records with the same id, itme, and name, the final result will not contain duplicate records, because the grouping collapses them.
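The original statements are not shown here, so as a rough sketch only, the first form might look like the following. The tbl1/tbl2 schemas, column names, and the sqlite3 harness are my assumptions, not the original question's ("itme" is spelled as in the text):

```python
import sqlite3

# Hypothetical schemas (assumed, not from the original question):
# tbl1(id, itme, name), tbl2(id, time).
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE tbl1 (id INTEGER, itme TEXT, name TEXT)")
cur.execute('CREATE TABLE tbl2 (id INTEGER, "time" INTEGER)')
cur.executemany("INSERT INTO tbl1 VALUES (?, ?, ?)",
                [(1, "a", "x"), (2, "b", "y")])
# id=1 occurs 10,000 times in tbl2, so the raw join balloons to 10,000 rows.
cur.executemany("INSERT INTO tbl2 VALUES (?, ?)",
                [(1, t) for t in range(10000)] + [(2, 5)])

# Form 1: join first, then group and take the maximum time.
rows = cur.execute("""
    SELECT t1.id, t1.itme, t1.name, MAX(t2."time") AS max_time
    FROM tbl1 t1
    LEFT JOIN tbl2 t2 ON t1.id = t2.id
    GROUP BY t1.id, t1.itme, t1.name
    ORDER BY t1.id
""").fetchall()
print(rows)  # [(1, 'a', 'x', 9999), (2, 'b', 'y', 5)]
```

The result is correct, but the engine had to materialize and then group all 10,001 joined rows to get it.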
The second SQL statement first aggregates tbl2 down to one row per id, and only then left joins. When tbl2 has many records per id, this filters out most of the data before the join, which improves performance. But if tbl1 contains duplicates, the query result may contain duplicates too. If you are sure tbl1 has no duplicates, this is the recommended form; if duplicates are possible, you can add DISTINCT to the SELECT, though of course that costs some performance.
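A sketch of this second form, again under the assumed tbl1(id, itme, name)/tbl2(id, time) schemas, with a duplicate row planted in tbl1 to show both the duplicate leak and the DISTINCT fix:

```python
import sqlite3

# Assumed schemas; tbl1 deliberately contains a duplicate row.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE tbl1 (id INTEGER, itme TEXT, name TEXT)")
cur.execute('CREATE TABLE tbl2 (id INTEGER, "time" INTEGER)')
cur.executemany("INSERT INTO tbl1 VALUES (?, ?, ?)",
                [(1, "a", "x"), (1, "a", "x"), (2, "b", "y")])  # duplicate!
cur.executemany("INSERT INTO tbl2 VALUES (?, ?)",
                [(1, t) for t in range(10000)] + [(2, 5)])

# Form 2: aggregate tbl2 to one row per id BEFORE the join.
form2 = """
    SELECT {distinct} t1.id, t1.itme, t1.name, g.max_time
    FROM tbl1 t1
    LEFT JOIN (SELECT id, MAX("time") AS max_time
               FROM tbl2 GROUP BY id) g ON t1.id = g.id
    ORDER BY t1.id
"""
rows1 = cur.execute(form2.format(distinct="")).fetchall()
print(len(rows1))  # 3 -- the duplicate tbl1 row survives the join

rows2 = cur.execute(form2.format(distinct="DISTINCT")).fetchall()
print(len(rows2))  # 2 -- DISTINCT removes it, at some extra cost
```

The subquery hands the join only one tbl2 row per id instead of 10,000, which is exactly where the speedup comes from.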
The third SQL statement is, basically, just wrong. Taking MAX of every column collapses everything to a single record: the largest id, the largest itme, and the largest name, values that may each come from a different row. That synthetic record is then left joined against the maximum time. So this SQL should be considered one that makes little sense.
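A tiny sketch of why column-wise MAX is wrong, using the same assumed tbl1 schema with rows chosen so the maxima come from different rows:

```python
import sqlite3

# Assumed schema tbl1(id, itme, name); the two rows are chosen so that
# the column-wise maxima come from DIFFERENT rows.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE tbl1 (id INTEGER, itme TEXT, name TEXT)")
cur.executemany("INSERT INTO tbl1 VALUES (?, ?, ?)",
                [(1, "b", "x"), (2, "a", "y")])

# Taking MAX of every column collapses the table to one synthetic record.
row = cur.execute("SELECT MAX(id), MAX(itme), MAX(name) FROM tbl1").fetchone()
print(row)  # (2, 'b', 'y') -- no such row exists in tbl1
```

The id comes from the second row but the itme from the first, so the "record" the query returns never existed in the table.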
To test the performance of a SQL query on Oracle, you can look at the execution plan to see how the statement is actually executed, and you can also measure the execution time. For anything complex, I recommend studying the book "The Art of Oracle Programming", which covers this in detail.
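On Oracle the tool is EXPLAIN PLAN (or autotrace); purely as an illustration of the same idea, SQLite exposes a similar facility via EXPLAIN QUERY PLAN, which a sketch like this can read alongside a simple wall-clock timing (schemas assumed as before):

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE tbl1 (id INTEGER, itme TEXT, name TEXT)")
cur.execute('CREATE TABLE tbl2 (id INTEGER, "time" INTEGER)')

# Ask the engine how it intends to run the join -- SQLite's analogue
# of reading an Oracle execution plan.
plan = cur.execute("""
    EXPLAIN QUERY PLAN
    SELECT t1.id, MAX(t2."time")
    FROM tbl1 t1 LEFT JOIN tbl2 t2 ON t1.id = t2.id
    GROUP BY t1.id
""").fetchall()
for step in plan:
    print(step[-1])  # e.g. "SCAN t1" / "SCAN t2"; details vary by version

# Measuring elapsed time is the other basic check mentioned above.
start = time.perf_counter()
cur.execute("SELECT COUNT(*) FROM tbl1").fetchone()
print(f"elapsed: {time.perf_counter() - start:.6f}s")
```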
If it is SQL Server, I am sorry, I do not know it well enough to say. But one thing holds generally: when doing joins, if you can drastically reduce the amount of data fed into the join from the secondary table (tbl2 above can be considered the secondary table), performance improves a lot. A large intermediate result not only takes up CPU processing time but, more importantly, can add a lot of IO operations, and those can cost a great deal of time.
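The size difference is easy to see by counting the rows each join actually has to process, again under the assumed schemas with 10,000 tbl2 rows for id=1:

```python
import sqlite3

# Assumed schemas; tbl2 is the "secondary" table, heavily skewed on id=1.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE tbl1 (id INTEGER, itme TEXT, name TEXT)")
cur.execute('CREATE TABLE tbl2 (id INTEGER, "time" INTEGER)')
cur.executemany("INSERT INTO tbl1 VALUES (?, ?, ?)",
                [(1, "a", "x"), (2, "b", "y")])
cur.executemany("INSERT INTO tbl2 VALUES (?, ?)",
                [(1, t) for t in range(10000)] + [(2, 5)])

# Rows flowing out of the raw join vs. the pre-aggregated join:
raw = cur.execute("""
    SELECT COUNT(*) FROM tbl1 t1
    LEFT JOIN tbl2 t2 ON t1.id = t2.id
""").fetchone()[0]
reduced = cur.execute("""
    SELECT COUNT(*) FROM tbl1 t1
    LEFT JOIN (SELECT id, MAX("time") AS max_time
               FROM tbl2 GROUP BY id) g ON t1.id = g.id
""").fetchone()[0]
print(raw, reduced)  # 10001 2
```

Pre-aggregating shrinks the join output from 10,001 rows to 2, which is the CPU and IO saving the paragraph above describes.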