Hive fetching results from HDFS is too slow because of too many map-only tasks; how can I merge the result files?

Time:05-20

The Hive query produces too many result files in the folder "/tmp/hive/hive": close to 40,000 map-only tasks, yet the total result set is only a little over 100 rows. Is there a way to merge the results after the query, reduce the number of result files, and improve the efficiency of pulling the results?

Here is the EXPLAIN output for the query:

+----------------------------------------------------+
|                      Explain                       |
+----------------------------------------------------+
| STAGE DEPENDENCIES:                                |
|   Stage-1 is a root stage                          |
|   Stage-0 depends on stages: Stage-1               |
|                                                    |
| STAGE PLANS:                                       |
|   Stage: Stage-1                                   |
|     Map Reduce                                     |
|       Map Operator Tree:                           |
|           TableScan                                |
|             alias: kafka_program_log             |
|             filterExpr: ((msg like '%disk loss%') and (ds > '2022-05-01')) (type: boolean) |
|             Statistics: Num rows: 36938084350 Data size: 11081425337136 Basic stats: PARTIAL Column stats: PARTIAL |
|             Filter Operator                        |
|               predicate: (msg like '%disk loss%') (type: boolean) |
|               Statistics: Num rows: 18469042175 Data size: 5540712668568 Basic stats: COMPLETE Column stats: PARTIAL |
|               Select Operator                      |
|                 expressions: server (type: string), msg (type: string), ts (type: string), ds (type: string), h (type: string) |
|                 outputColumnNames: _col0, _col1, _col2, _col3, _col4 |
|                 Statistics: Num rows: 18469042175 Data size: 5540712668568 Basic stats: COMPLETE Column stats: PARTIAL |
|                 File Output Operator               |
|                   compressed: false                |
|                   Statistics: Num rows: 18469042175 Data size: 5540712668568 Basic stats: COMPLETE Column stats: PARTIAL |
|                   table:                           |
|                       input format: org.apache.hadoop.mapred.TextInputFormat |
|                       output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat |
|                       serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe |
|                                                    |
|   Stage: Stage-0                                   |
|     Fetch Operator                                 |
|       limit: -1                                    |
|       Processor Tree:                              |
|         ListSink                                   |
|                                                    |
+----------------------------------------------------+

CodePudding user response:

  1. Recreate the table using ORC/Parquet and you'll get much better performance. This is your number 1 priority for speeding things up (a sketch of this follows the example query below).
  2. You are using a like operator, which means scanning all the data. You may want to consider rewriting it to use a join/where clause instead. This will run much faster. Here's an example of what you could do to make things better.
    with words as  -- shortcut for a readable sub-query
    (
      select
        log.msg
      from
        kafka_program_log log
      lateral view explode(split(msg, ' ')) w as word  -- one row per word in msg; assumes 'disk loss' appears as separate words in msg
      where
        word in ('disk', 'loss')  -- filter the words down to the ones we care about
      and
        ds > '2022-05-01'         -- filter the dates down to the ones we care about
      group by
        log.msg                   -- gather the msgs together
      having
        count(word) >= 2          -- only pull back msgs that contain at least two of the words we are interested in
    )  -- end sub-query
    select
      *
    from kafka_program_log log
    inner join words
      on words.msg = log.msg      -- this join should really reduce the data we examine
    where
      log.msg like '%disk loss%'  -- like is fine now to make sure it's exactly what we're looking for
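As a rough illustration of point 1, here is a minimal sketch of recreating the table as ORC. It assumes ds and h are partition columns and that the columns match the EXPLAIN output above; kafka_program_log_orc is a made-up name for the new table, so adjust everything to your actual schema:

    -- Sketch only: column names taken from the EXPLAIN above,
    -- kafka_program_log_orc is a hypothetical name for the new table.
    CREATE TABLE kafka_program_log_orc (
      server STRING,
      msg    STRING,
      ts     STRING
    )
    PARTITIONED BY (ds STRING, h STRING)
    STORED AS ORC;

    SET hive.exec.dynamic.partition=true;
    SET hive.exec.dynamic.partition.mode=nonstrict;

    INSERT OVERWRITE TABLE kafka_program_log_orc PARTITION (ds, h)
    SELECT server, msg, ts, ds, h
    FROM kafka_program_log;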

CodePudding user response:

set mapred.max.split.size=2560000000;
Increase the maximum split size so that a single map task processes more data, thereby reducing the number of map tasks.
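If the goal is also to merge the many small result files after the query finishes, the split-size setting above can be combined with Hive's small-file merge settings. The sizes below are only illustrative and should be tuned for your cluster:

set hive.input.format=org.apache.hadoop.hive.ql.io.CombineHiveInputFormat;  -- combine small input splits into fewer maps
set mapred.max.split.size=2560000000;          -- as suggested above
set hive.merge.mapfiles=true;                  -- merge small files written by map-only jobs
set hive.merge.mapredfiles=true;               -- merge small files written by map-reduce jobs
set hive.merge.smallfiles.avgsize=256000000;   -- start a merge job when the average output file is smaller than this
set hive.merge.size.per.task=256000000;        -- target size of the merged files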
