Stdoutput Warning: /opt/cloudera/parcels/CDH-6.3.2-1.cdh6.3.2.p0.1605554/bin/../lib/sqoop/../accumulo does not exist! Accumulo imports will fail. Stdoutput Please set $ACCUMULO_HOME to the root of your Accumulo installation. Exit code of the Shell command 1 <<< Invocation of Shell command completed <<< java.lang.reflect.InvocationTargetException at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.oozie.action.hadoop.LauncherAM.runActionMain(LauncherAM.java:410) at org.apache.oozie.action.hadoop.LauncherAM.access$300(LauncherAM.java:55) at org.apache.oozie.action.hadoop.LauncherAM$2.run(LauncherAM.java:223) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875) at org.apache.oozie.action.hadoop.LauncherAM.run(LauncherAM.java:217) at org.apache.oozie.action.hadoop.LauncherAM$1.run(LauncherAM.java:153) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875) at org.apache.oozie.action.hadoop.LauncherAM.main(LauncherAM.java:141) Caused by: org.apache.oozie.action.hadoop.LauncherMainException at org.apache.oozie.action.hadoop.ShellMain.run(ShellMain.java:76) at org.apache.oozie.action.hadoop.LauncherMain.run(LauncherMain.java:104) at org.apache.oozie.action.hadoop.ShellMain.main(ShellMain.java:63) ... 
16 more Failing Oozie Launcher, Main Class [org.apache.oozie.action.hadoop.ShellMain], exit code [1] Oozie Launcher, uploading action data to HDFS sequence file: hdfs://nameservice0/user/admin/oozie-oozi/0000118-230724100647793-oozie-oozi-W/shell-af10--shell/action-data.seq Stopping AM Callback notification attempts left 0 Callback notification trying http://nn.hdfs.offline:11000/oozie/callback?id=0000118-230724100647793-oozie-oozi-W@shell-af10&status=FAILED Callback notification to http://nn.hdfs.offline:11000/oozie/callback?id=0000118-230724100647793-oozie-oozi-W@shell-af10&status=FAILED succeeded Callback notification succeeded
Posted: 2023-11-03 13:04:29 · Views: 258
Based on the log you provided, an Oozie job failed. The message says the Accumulo directory cannot be found and suggests setting $ACCUMULO_HOME to the root of your Accumulo installation; note that this particular message is normally only a warning from the Sqoop launcher script, and the action actually failed because the shell command exited with code 1, so the root cause may also be elsewhere in the command's own output. The rest of the log is the Oozie Launcher stack trace and its callback notifications.

You can try the following steps:

1. Make sure Accumulo is correctly installed (if you actually use it) and that ACCUMULO_HOME is set. Verify with `echo $ACCUMULO_HOME` in a terminal; if it is unset, point it at your installation directory.
2. Check that your Oozie job configuration sets the Accumulo-related properties correctly; in particular, check that the Accumulo directory path is right.
3. If ACCUMULO_HOME and the configuration are correct and the problem persists, make sure the environment variables are exported before the Oozie job runs; you can add the necessary exports to the Oozie job's launch script.
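Step 3 can be sketched as a small guard at the top of the shell action's wrapper script. The install path below is a placeholder assumption, not something taken from your logs:

```shell
#!/bin/sh
# Hedged sketch: export ACCUMULO_HOME (placeholder path) before invoking
# sqoop, so the launcher's accumulo check is satisfied. Keeps any value
# already exported by the Oozie action configuration.
export ACCUMULO_HOME="${ACCUMULO_HOME:-/opt/accumulo}"
echo "ACCUMULO_HOME=$ACCUMULO_HOME"
# ... the real sqoop/hive command would follow here ...
```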
If these steps do not resolve the issue, please provide more details about your environment, the Oozie job configuration, and any other relevant information so we can help further.
Related question
ERROR sqoop.Sqoop: Got exception running Sqoop: java.lang.NullPointerException java.lang.NullPointerException at org.json.JSONObject.<init>(JSONObject.java:144) at org.apache.sqoop.util.SqoopJsonUtil.getJsonStringforMap(SqoopJsonUtil.java:43) at org.apache.sqoop.SqoopOptions.writeProperties(SqoopOptions.java:867) at org.apache.sqoop.mapreduce.JobBase.putSqoopOptionsToConfiguration(JobBase.java:393) at org.apache.sqoop.mapreduce.JobBase.createJob(JobBase.java:379) at org.apache.sqoop.mapreduce.ImportJobBase.runImport(ImportJobBase.java:255) at org.apache.sqoop.manager.SqlManager.importQuery(SqlManager.java:747) at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:536) at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:633) at org.apache.sqoop.Sqoop.run(Sqoop.java:146) at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76) at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:182) at org.apache.sqoop.Sqoop.runTool(Sqoop.java:233) at org.apache.sqoop.Sqoop.runTool(Sqoop.java:242) at org.apache.sqoop.Sqoop.main(Sqoop.java:251) Log Type: stdout Log Upload Time: Mon Jul 24 10:47:38 +0800 2023 Log Length: 74530 Showing 4096 bytes of 74530 total. Click here for the full log. 
35517561_3806_01_000001: PRELAUNCH_OUT=/yarn/container-logs/application_1683335517561_3806/container_1683335517561_3806_01_000001/prelaunch.out: NM_AUX_SERVICE_mapreduce_shuffle=AAA0+gAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=: NM_PORT=8041: HADOOP_YARN_HOME=/opt/cloudera/parcels/CDH-6.3.2-1.cdh6.3.2.p0.1605554/lib/hadoop-yarn: USER=admin: CLASSPATH=/yarn/nm/usercache/admin/appcache/application_1683335517561_3806/container_1683335517561_3806_01_000001:/yarn/nm/usercache/admin/appcache/application_1683335517561_3806/container_1683335517561_3806_01_000001/*:/etc/hadoop/conf.cloudera.yarn:/opt/cloudera/parcels/CDH-6.3.2-1.cdh6.3.2.p0.1605554/lib/hadoop/*:/opt/cloudera/parcels/CDH-6.3.2-1.cdh6.3.2.p0.1605554/lib/hadoop/lib/*:/opt/cloudera/parcels/CDH-6.3.2-1.cdh6.3.2.p0.1605554/lib/hadoop-hdfs/*:/opt/cloudera/parcels/CDH-6.3.2-1.cdh6.3.2.p0.1605554/lib/hadoop-hdfs/lib/*:/opt/cloudera/parcels/CDH-6.3.2-1.cdh6.3.2.p0.1605554/lib/hadoop-yarn/*:/opt/cloudera/parcels/CDH-6.3.2-1.cdh6.3.2.p0.1605554/lib/hadoop-yarn/lib/*:: PRELAUNCH_ERR=/yarn/container-logs/application_1683335517561_3806/container_1683335517561_3806_01_000001/prelaunch.err: HADOOP_TOKEN_FILE_LOCATION=/yarn/nm/usercache/admin/appcache/application_1683335517561_3806/container_1683335517561_3806_01_000001/container_tokens: LOCAL_USER_DIRS=/yarn/nm/usercache/admin/: OOZIE_ACTION_CONF_XML=/yarn/nm/usercache/admin/appcache/application_1683335517561_3806/container_1683335517561_3806_01_000001/action.xml: SHLVL=2: HOME=/home/: CONTAINER_ID=container_1683335517561_3806_01_000001: MALLOC_ARENA_MAX=4: — what is going on here?
According to the log, Sqoop hit a NullPointerException. The stack trace shows the NPE is thrown while SqoopOptions.writeProperties serializes the options, via SqoopJsonUtil.getJsonStringforMap, where the map handed to the org.json JSONObject constructor appears to be null.

To resolve this, try the following steps:

1. Check your Sqoop configuration for errors or missing settings; make sure every required option is set correctly.
2. Check that your input data is valid and that nothing in it prevents Sqoop from parsing or processing it; confirm it meets Sqoop's format and content requirements.
3. Make sure the Sqoop version you are using is compatible with your environment and is correctly installed and configured; check its version and dependencies.
4. If the problem persists, try reinstalling or upgrading Sqoop; older versions may contain known bugs that a newer release fixes.
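When collecting those details, it helps to pull the first exception out of the saved YARN container log. A self-contained sketch follows; the log name and its contents are a sample written inline so the commands run anywhere:

```shell
# Write a small sample of the launcher log inline; in practice you would
# point at the log you saved from YARN with `yarn logs -applicationId ...`.
log=sqoop-action.log
cat > "$log" <<'EOF'
ERROR sqoop.Sqoop: Got exception running Sqoop: java.lang.NullPointerException
java.lang.NullPointerException
        at org.json.JSONObject.<init>(JSONObject.java:144)
        at org.apache.sqoop.util.SqoopJsonUtil.getJsonStringforMap(SqoopJsonUtil.java:43)
EOF
# Show the first exception line; nested causes start with "Caused by".
grep -m1 'Exception' "$log"
grep 'Caused by' "$log" || true
```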
If the above steps do not help, please provide more about your environment configuration, Sqoop version, input data, and anything else relevant, so we can better help resolve the issue.
[root@hadoop102 cloudera]# /opt/cloudera/parcels/CDH-6.3.2-1.cdh6.3.2.p0.1605554/lib/hive/bin/schematool -initSchema -dbType mysql -verbose WARNING: Use "yarn jar" to launch YARN applications. SLF4J: Class path contains multiple SLF4J bindings. SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/CDH-6.3.2-1.cdh6.3.2.p0.1605554/jars/log4j-slf4j-impl-2.8.2.jar!/org/slf4j/impl/StaticLoggerBinder.class] SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/CDH-6.3.2-1.cdh6.3.2.p0.1605554/jars/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class] SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation. SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory] Metastore connection URL: jdbc:derby:;databaseName=metastore_db;create=true Metastore Connection Driver : org.apache.derby.jdbc.EmbeddedDriver Metastore connection User: APP Starting metastore schema initialization to 2.1.1-cdh6.3.2 Initialization script hive-schema-2.1.1.mysql.sql Connecting to jdbc:derby:;databaseName=metastore_db;create=true Connected to: Apache Derby (version 10.14.1.0 - (1808820)) Driver: Apache Derby Embedded JDBC Driver (version 10.14.1.0 - (1808820)) Transaction isolation: TRANSACTION_READ_COMMITTED 0: jdbc:derby:> !autocommit on Autocommit status: true 0: jdbc:derby:> /*!40101 SET @OLD_CHARACTER_SET_CLIENT=@@CHARACTER_SET_CLIENT */ Error: Syntax error: Encountered "<EOF>" at line 1, column 64. (state=42X01,code=30000) Closing: 0: jdbc:derby:;databaseName=metastore_db;create=true org.apache.hadoop.hive.metastore.HiveMetaException: Schema initialization FAILED! Metastore state would be inconsistent !! Underlying cause: java.io.IOException : Schema script failed, errorcode 2 org.apache.hadoop.hive.metastore.HiveMetaException: Schema initialization FAILED! Metastore state would be inconsistent !! 
at org.apache.hive.beeline.HiveSchemaTool.doInit(HiveSchemaTool.java:568) at org.apache.hive.beeline.HiveSchemaTool.doInit(HiveSchemaTool.java:541) at org.apache.hive.beeline.HiveSchemaTool.main(HiveSchemaTool.java:1137) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.util.RunJar.run(RunJar.java:313) at org.apache.hadoop.util.RunJar.main(RunJar.java:227) Caused by: java.io.IOException: Schema script failed, errorcode 2 at org.apache.hive.beeline.HiveSchemaTool.runBeeLine(HiveSchemaTool.java:962) at org.apache.hive.beeline.HiveSchemaTool.runBeeLine(HiveSchemaTool.java:941) at org.apache.hive.beeline.HiveSchemaTool.doInit(HiveSchemaTool.java:564) ... 8 more *** schemaTool failed ***
This error shows that initializing the Hive metastore with schematool failed, which would leave the metastore in an inconsistent state. Key analysis and suggested fixes follow:
**Symptom analysis:**
1. **Database type mismatch**: although `-dbType mysql` was specified, the log shows a Derby connection:
```
Metastore connection URL: jdbc:derby:;databaseName=metastore_db;create=true
```
which means the Hive configuration is not actually pointing at MySQL
2. **SQL syntax error**:
```bash
Error: Syntax error: Encountered "<EOF>" at line 1, column 64
```
This occurs when the MySQL-specific `/*!40101 SET...*/` directives in the init script are run against Derby, which cannot parse that syntax
**Resolution steps:**
1. **Check the Hive configuration file**
```xml
<!-- Make sure these properties in hive-site.xml point at MySQL -->
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://your-mysql-host:3306/hive_metastore</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
</property>
```
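To double-check which URL Hive will actually use, grep the live config (on a CDH node that is typically /etc/hive/conf/hive-site.xml). The sketch below writes a tiny sample file so it runs standalone:

```shell
# Self-contained check: create a sample hive-site.xml, then extract the
# metastore URL exactly as you would against the real file.
cat > hive-site-sample.xml <<'EOF'
<configuration>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://your-mysql-host:3306/hive_metastore</value>
  </property>
</configuration>
EOF
grep -A1 'javax.jdo.option.ConnectionURL' hive-site-sample.xml
```

If this shows a jdbc:derby URL instead, Hive is still falling back to the Derby default.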
2. **Prepare the MySQL environment**
```sql
-- Log in to MySQL and create the database and user
CREATE DATABASE hive_metastore;
CREATE USER 'hive'@'%' IDENTIFIED BY 'your_password';
GRANT ALL PRIVILEGES ON hive_metastore.* TO 'hive'@'%';
FLUSH PRIVILEGES;
```
3. **Install the MySQL JDBC driver**
```bash
# Copy mysql-connector-java.jar into this directory
/opt/cloudera/parcels/CDH-6.3.2-1.cdh6.3.2.p0.1605554/lib/hive/lib/
```
4. **Clean up stale data (important)**
```bash
# Remove the Derby database created in the directory where schematool was run
rm -rf metastore_db
```
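Since `rm -rf` is destructive, a safer variant first checks that the directory really is a stray Derby database (Derby marks its databases with a `service.properties` file). The leftover directory is simulated here so the sketch runs standalone:

```shell
# Simulate the leftover Derby directory so the sketch is self-contained.
mkdir -p metastore_db
touch metastore_db/service.properties
# Only delete when the Derby marker file is present.
if [ -f metastore_db/service.properties ]; then
  rm -rf metastore_db
  echo "removed stale Derby metastore_db"
fi
```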
5. **Re-run the initialization**
```bash
# On CDH, use the same full parcel path as before if schematool is not on PATH
schematool -initSchema -dbType mysql -verbose
```
**Additional notes:**
- The SLF4J warning indicates a logging-framework conflict; it normally does not affect functionality and can be resolved by removing the duplicate binding jar
- MySQL 5.7+ is recommended; make sure `lower_case_table_names` is set appropriately
- If a previous initialization already failed, manually drop the existing hive_metastore database in MySQL before retrying
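For the SLF4J note, the two binding jars named in the warning can be located with a quick listing; the jars directory is simulated here so the sketch runs standalone (on the cluster, list the parcel's jars/ directory shown in the warning instead):

```shell
# Simulate the two binding jars named in the SLF4J warning.
mkdir -p jars
touch jars/log4j-slf4j-impl-2.8.2.jar jars/slf4j-log4j12-1.7.25.jar
# Both jars ship org.slf4j.impl.StaticLoggerBinder; keep only one of them.
ls jars | grep -E 'slf4j-log4j12|log4j-slf4j-impl'
```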
After a successful run you should see output similar to:
```bash
Initialization script completed
schemaTool completed
```