orc-tools Usage Guide

Overview



This article describes how to parse ORC files with the orc-tools utility, which is useful for troubleshooting problems such as corrupted data files.

Details


You can print the jar's usage help with -h:

[root@st01~]$ java -jar orc-tools-1.6.3-uber.jar -h
ORC Java Tools

usage: java -jar orc-tools-*.jar [--help] [--define X=Y] <command> <args>

Commands:
   version - print the version of this ORC tool
   meta - print the metadata about the ORC file
   data - print the data from the ORC file
   scan - scan the ORC file
   convert - convert CSV and JSON files to ORC
   json-schema - scan JSON files to determine their schema
   key - print information about the keys

To get more help, provide -h to the command

The version command prints the orc-tools version:

[root@st01~]$ java -jar orc-tools-1.6.3-uber.jar version
ORC 1.6.3

Below are examples of the most commonly used commands.

First, create an ORC transaction table named emp:

DROP TABLE IF EXISTS EMP;
CREATE TABLE EMP(
       EMPNO int,
       ENAME string,
       JOB string,
       MGR INT,
       HIREDATE DATE,
       SAL INT,
       COMM INT,
       DEPTNO INT
) CLUSTERED BY (empno) INTO 3 BUCKETS
STORED AS ORC_TRANSACTION;

INSERT INTO EMP VALUES (7369,'SMITH','CLERK',7902,tdh_todate('17-12-1980','dd-mm-yyyy'),800,NULL,20);
INSERT INTO EMP VALUES (7499,'ALLEN','SALESMAN',7698,tdh_todate('20-2-1981','dd-mm-yyyy'),1600,300,30);
INSERT INTO EMP VALUES (7521,'WARD','SALESMAN',7698,tdh_todate('22-2-1981','dd-mm-yyyy'),1250,500,30);
INSERT INTO EMP VALUES (7566,'JONES','MANAGER',7839,tdh_todate('2-4-1981','dd-mm-yyyy'),2975,NULL,20);
INSERT INTO EMP VALUES (7654,'MARTIN','SALESMAN',7698,tdh_todate('28-9-1981','dd-mm-yyyy'),1250,1400,30);
INSERT INTO EMP VALUES (7698,'BLAKE','MANAGER',7839,tdh_todate('1-5-1981','dd-mm-yyyy'),2850,NULL,30);
INSERT INTO EMP VALUES (7782,'CLARK','MANAGER',7839,tdh_todate('9-6-1981','dd-mm-yyyy'),2450,NULL,10);
INSERT INTO EMP VALUES (7788,'SCOTT','ANALYST',7566,tdh_todate('13-4-1987','dd-mm-yyyy'),3000,NULL,20);
INSERT INTO EMP VALUES (7839,'KING','PRESIDENT',NULL,tdh_todate('17-11-1981','dd-mm-yyyy'),5000,NULL,10);
INSERT INTO EMP VALUES (7844,'TURNER','SALESMAN',7698,tdh_todate('8-9-1981','dd-mm-yyyy'),1500,0,30);
INSERT INTO EMP VALUES (7876,'ADAMS','CLERK',7788,tdh_todate('13-5-1987', 'dd-mm-yyyy'),1100,NULL,20);
INSERT INTO EMP VALUES (7900,'JAMES','CLERK',7698,tdh_todate('3-12-1981','dd-mm-yyyy'),950,NULL,30);
INSERT INTO EMP VALUES (7902,'FORD','ANALYST',7566,tdh_todate('3-12-1981','dd-mm-yyyy'),3000,NULL,20);
INSERT INTO EMP VALUES (7934,'MILLER','CLERK',7782,tdh_todate('23-1-1982','dd-mm-yyyy'),1300,NULL,10);

SELECT * FROM EMP;
SHOW CREATE TABLE emp;
dfs -ls hdfs://nameservice1/inceptor1/user/hive/warehouse/default.db/hive/emp
dfs -ls hdfs://nameservice1/inceptor1/user/hive/warehouse/default.db/hive/emp/base_0000079
-rwx--x--x   3 hive hive       1393 2022-11-24 15:55 hdfs://nameservice1/inceptor1/user/hive/warehouse/default.db/hive/emp/base_0000079/bucket_00000
-rwx--x--x   3 hive hive       1303 2022-11-24 15:55 hdfs://nameservice1/inceptor1/user/hive/warehouse/default.db/hive/emp/base_0000079/bucket_00001
-rwx--x--x   3 hive hive       1292 2022-11-24 15:55 hdfs://nameservice1/inceptor1/user/hive/warehouse/default.db/hive/emp/base_0000079/bucket_00002

Use the hdfs command to get these three bucket files to the local filesystem for parsing. (Note: do not use merging commands such as getmerge; a merged file is no longer a valid ORC file and cannot be parsed.)
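For example, a minimal sketch using the standard hdfs dfs -get command and the paths listed above:

hdfs dfs -get hdfs://nameservice1/inceptor1/user/hive/warehouse/default.db/hive/emp/base_0000079/bucket_00000 .
hdfs dfs -get hdfs://nameservice1/inceptor1/user/hive/warehouse/default.db/hive/emp/base_0000079/bucket_00001 .
hdfs dfs -get hdfs://nameservice1/inceptor1/user/hive/warehouse/default.db/hive/emp/base_0000079/bucket_00002 .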

1. Parsing metadata

The metadata output is long, so we take the result for bucket_00000:

[root@st01~]$ java -jar orc-tools-1.6.3-uber.jar meta ./bucket_00000
log4j:WARN No appenders could be found for logger (org.apache.hadoop.util.Shell).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Processing data file bucket_00000 [length: 1393]
Structure for bucket_00000
File Version: 0.12 with HIVE_4243
Rows: 7
Compression: ZLIB
Compression size: 262144
Calendar: Julian/Gregorian
Type: struct<operation:int,originalTransaction:bigint,bucket:int,rowId:bigint,currentTransaction:bigint,row:struct<empno:int,ename:string,job:string,mgr:int,hiredate:date,sal:int,comm:int,deptno:int>>

Stripe Statistics:
  Stripe 1:
    Column 0: count: 7 hasNull: false
    Column 1: count: 7 hasNull: false min: 0 max: 0 sum: 0
    Column 2: count: 7 hasNull: false min: 68 max: 78 sum: 505
    Column 3: count: 7 hasNull: false min: 0 max: 0 sum: 0
    Column 4: count: 7 hasNull: false min: 0 max: 0 sum: 0
    Column 5: count: 7 hasNull: false min: 68 max: 78 sum: 505
    Column 6: count: 7 hasNull: false
    Column 7: count: 7 hasNull: false min: 7521 max: 7902 sum: 54096
    Column 8: count: 7 hasNull: false min: BLAKE max: WARD sum: 32
    Column 9: count: 7 hasNull: false min: ANALYST max: SALESMAN sum: 52
    Column 10: count: 6 hasNull: true min: 7566 max: 7839 sum: 46347
    Column 11: count: 7 hasNull: false min: 1981-01-01 max: 1987-01-13
    Column 12: count: 7 hasNull: false min: 1250 max: 5000 sum: 20525
    Column 13: count: 1 hasNull: true min: 500 max: 500 sum: 500
    Column 14: count: 7 hasNull: false min: 10 max: 30 sum: 140

File Statistics:
  Column 0: count: 7 hasNull: false
  Column 1: count: 7 hasNull: false min: 0 max: 0 sum: 0
  Column 2: count: 7 hasNull: false min: 68 max: 78 sum: 505
  Column 3: count: 7 hasNull: false min: 0 max: 0 sum: 0
  Column 4: count: 7 hasNull: false min: 0 max: 0 sum: 0
  Column 5: count: 7 hasNull: false min: 68 max: 78 sum: 505
  Column 6: count: 7 hasNull: false
  Column 7: count: 7 hasNull: false min: 7521 max: 7902 sum: 54096
  Column 8: count: 7 hasNull: false min: BLAKE max: WARD sum: 32
  Column 9: count: 7 hasNull: false min: ANALYST max: SALESMAN sum: 52
  Column 10: count: 6 hasNull: true min: 7566 max: 7839 sum: 46347
  Column 11: count: 7 hasNull: false min: 1981-01-01 max: 1987-01-13
  Column 12: count: 7 hasNull: false min: 1250 max: 5000 sum: 20525
  Column 13: count: 1 hasNull: true min: 500 max: 500 sum: 500
  Column 14: count: 7 hasNull: false min: 10 max: 30 sum: 140

Stripes:
  Stripe: offset: 3 data: 234 rows: 7 tail: 136 index: 385
    Stream: column 0 section ROW_INDEX start: 3 length 11
    Stream: column 1 section ROW_INDEX start: 14 length 24
    Stream: column 2 section ROW_INDEX start: 38 length 27
    Stream: column 3 section ROW_INDEX start: 65 length 24
    Stream: column 4 section ROW_INDEX start: 89 length 24
    Stream: column 5 section ROW_INDEX start: 113 length 27
    Stream: column 6 section ROW_INDEX start: 140 length 11
    Stream: column 7 section ROW_INDEX start: 151 length 28
    Stream: column 8 section ROW_INDEX start: 179 length 34
    Stream: column 9 section ROW_INDEX start: 213 length 39
    Stream: column 10 section ROW_INDEX start: 252 length 30
    Stream: column 11 section ROW_INDEX start: 282 length 24
    Stream: column 12 section ROW_INDEX start: 306 length 28
    Stream: column 13 section ROW_INDEX start: 334 length 29
    Stream: column 14 section ROW_INDEX start: 363 length 25
    Stream: column 1 section DATA start: 388 length 5
    Stream: column 2 section DATA start: 393 length 11
    Stream: column 3 section DATA start: 404 length 5
    Stream: column 4 section DATA start: 409 length 5
    Stream: column 5 section DATA start: 414 length 11
    Stream: column 7 section DATA start: 425 length 13
    Stream: column 8 section DATA start: 438 length 35
    Stream: column 8 section LENGTH start: 473 length 14
    Stream: column 9 section DATA start: 487 length 11
    Stream: column 9 section LENGTH start: 498 length 7
    Stream: column 9 section DICTIONARY_DATA start: 505 length 34
    Stream: column 10 section PRESENT start: 539 length 5
    Stream: column 10 section DATA start: 544 length 16
    Stream: column 11 section DATA start: 560 length 19
    Stream: column 12 section DATA start: 579 length 19
    Stream: column 13 section PRESENT start: 598 length 5
    Stream: column 13 section DATA start: 603 length 7
    Stream: column 14 section DATA start: 610 length 12
    Encoding column 0: DIRECT
    Encoding column 1: DIRECT_V2
    Encoding column 2: DIRECT_V2
    Encoding column 3: DIRECT_V2
    Encoding column 4: DIRECT_V2
    Encoding column 5: DIRECT_V2
    Encoding column 6: DIRECT
    Encoding column 7: DIRECT_V2
    Encoding column 8: DIRECT_V2
    Encoding column 9: DICTIONARY_V2[4]
    Encoding column 10: DIRECT_V2
    Encoding column 11: DIRECT_V2
    Encoding column 12: DIRECT_V2
    Encoding column 13: DIRECT_V2
    Encoding column 14: DIRECT_V2

File length: 1393 bytes
Padding length: 0 bytes
Padding ratio: 0%

User Metadata:
  hive.acid.key.index=
  hive.acid.stats=7,0,0
  orc.bloomfilter.columns=
  orc.column.index.concat.str=all
________________________________________________________________________________________________________________________

2. Parsing data

The data command clearly shows the rows stored in each bucket file. Each output line wraps the actual table row (under the row field) in ACID metadata: operation, originalTransaction, bucket, rowId, and currentTransaction.

[root@st01~]$ java -jar orc-tools-1.6.3-uber.jar data ./bucket_00000
log4j:WARN No appenders could be found for logger (org.apache.hadoop.util.Shell).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Processing data file bucket_00000 [length: 1393]
{"operation":0,"originalTransaction":68,"bucket":0,"rowId":0,"currentTransaction":68,"row":{"empno":7521,"ename":"WARD","job":"SALESMAN","mgr":7698,"hiredate":"1981-01-22","sal":1250,"comm":500,"deptno":30}}
{"operation":0,"originalTransaction":69,"bucket":0,"rowId":0,"currentTransaction":69,"row":{"empno":7566,"ename":"JONES","job":"MANAGER","mgr":7839,"hiredate":"1981-01-02","sal":2975,"comm":null,"deptno":20}}
{"operation":0,"originalTransaction":71,"bucket":0,"rowId":0,"currentTransaction":71,"row":{"empno":7698,"ename":"BLAKE","job":"MANAGER","mgr":7839,"hiredate":"1981-01-01","sal":2850,"comm":null,"deptno":30}}
{"operation":0,"originalTransaction":72,"bucket":0,"rowId":0,"currentTransaction":72,"row":{"empno":7782,"ename":"CLARK","job":"MANAGER","mgr":7839,"hiredate":"1981-01-09","sal":2450,"comm":null,"deptno":10}}
{"operation":0,"originalTransaction":73,"bucket":0,"rowId":0,"currentTransaction":73,"row":{"empno":7788,"ename":"SCOTT","job":"ANALYST","mgr":7566,"hiredate":"1987-01-13","sal":3000,"comm":null,"deptno":20}}
{"operation":0,"originalTransaction":74,"bucket":0,"rowId":0,"currentTransaction":74,"row":{"empno":7839,"ename":"KING","job":"PRESIDENT","mgr":null,"hiredate":"1981-01-17","sal":5000,"comm":null,"deptno":10}}
{"operation":0,"originalTransaction":78,"bucket":0,"rowId":0,"currentTransaction":78,"row":{"empno":7902,"ename":"FORD","job":"ANALYST","mgr":7566,"hiredate":"1981-01-03","sal":3000,"comm":null,"deptno":20}}
________________________________________________________________________________________________________________________
[root@st01~]$ java -jar orc-tools-1.6.3-uber.jar data ./bucket_00001
log4j:WARN No appenders could be found for logger (org.apache.hadoop.util.Shell).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Processing data file bucket_00001 [length: 1303]
{"operation":0,"originalTransaction":66,"bucket":1,"rowId":0,"currentTransaction":66,"row":{"empno":7369,"ename":"SMITH","job":"CLERK","mgr":7902,"hiredate":"1980-01-17","sal":800,"comm":null,"deptno":20}}
{"operation":0,"originalTransaction":70,"bucket":1,"rowId":0,"currentTransaction":70,"row":{"empno":7654,"ename":"MARTIN","job":"SALESMAN","mgr":7698,"hiredate":"1981-01-28","sal":1250,"comm":1400,"deptno":30}}
{"operation":0,"originalTransaction":76,"bucket":1,"rowId":0,"currentTransaction":76,"row":{"empno":7876,"ename":"ADAMS","job":"CLERK","mgr":7788,"hiredate":"1987-01-13","sal":1100,"comm":null,"deptno":20}}
{"operation":0,"originalTransaction":77,"bucket":1,"rowId":0,"currentTransaction":77,"row":{"empno":7900,"ename":"JAMES","job":"CLERK","mgr":7698,"hiredate":"1981-01-03","sal":950,"comm":null,"deptno":30}}
________________________________________________________________________________________________________________________

[root@st01~]$ java -jar orc-tools-1.6.3-uber.jar data ./bucket_00002
log4j:WARN No appenders could be found for logger (org.apache.hadoop.util.Shell).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Processing data file bucket_00002 [length: 1292]
{"operation":0,"originalTransaction":67,"bucket":2,"rowId":0,"currentTransaction":67,"row":{"empno":7499,"ename":"ALLEN","job":"SALESMAN","mgr":7698,"hiredate":"1981-01-20","sal":1600,"comm":300,"deptno":30}}
{"operation":0,"originalTransaction":75,"bucket":2,"rowId":0,"currentTransaction":75,"row":{"empno":7844,"ename":"TURNER","job":"SALESMAN","mgr":7698,"hiredate":"1981-01-08","sal":1500,"comm":0,"deptno":30}}
{"operation":0,"originalTransaction":79,"bucket":2,"rowId":0,"currentTransaction":79,"row":{"empno":7934,"ename":"MILLER","job":"CLERK","mgr":7782,"hiredate":"1982-01-23","sal":1300,"comm":null,"deptno":10}}
________________________________________________________________________________________________________________________

The remaining commands are used less often; you can experiment with them yourself.
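For example, convert turns CSV and JSON files into ORC. A minimal sketch, assuming a local file emp.json (the file name and schema here are illustrative; run convert -h to confirm the options supported by your version):

java -jar orc-tools-1.6.3-uber.jar convert emp.json --schema "struct<empno:int,ename:string>" --output emp.orc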

FAQ

1. Execution fails with: org.apache.hadoop.security.AccessControlException: SIMPLE authentication is not enabled. Available:[TOKEN, OAUTHBEARER, KERBEROS]

This occurs when reading a data file directly from a Kerberos-secured HDFS path, which is not recommended. Get the data file to the local filesystem first (as shown above) and parse it locally.

2. Execution fails with: Exception in thread "main" java.lang.UnsupportedClassVersionError: org/apache/hadoop/hive/common/io/DiskRange : Unsupported major.minor version 52.0

Run the tool with JDK 1.8: class file major version 52 corresponds to Java 8, so an older JRE cannot load the classes in the uber jar.
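If the default java on the PATH is older, invoke a JDK 8 binary explicitly. A minimal sketch (the JDK install path below is hypothetical; adjust it to your environment):

/usr/java/jdk1.8.0/bin/java -jar orc-tools-1.6.3-uber.jar meta ./bucket_00000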
