org.apache.iceberg.hive.HiveTableOperations$WaitingForLockException: Waiting for lock.

This post covers a waiting-for-lock exception hit during Hive table operations, specifically `HiveTableOperations$WaitingForLockException`. The problem stems from a timeout while trying to acquire the Hive table lock, which leaves the lock stuck in the WAITING state and blocks subsequent requests. The fix is to query the HIVE_LOCKS table in the HiveMetaStore backing database and delete the corresponding lock records, releasing the lock. It is advisable to clean up stale locks promptly to avoid blocking.


Error message

```
org.apache.iceberg.hive.HiveTableOperations$WaitingForLockException: Waiting for lock.
	at org.apache.iceberg.hive.HiveTableOperations.lambda$acquireLock$9(HiveTableOperations.java:444) ~[dw-0.1.jar:?]
	at org.apache.iceberg.util.Tasks$Builder.runTaskWithRetry(Tasks.java:405) ~[dw-0.1.jar:?]
	at org.apache.iceberg.util.Tasks$Builder.runSingleThreaded(Tasks.java:214) ~[dw-0.1.jar:?]
	at org.apache.iceberg.util.Tasks$Builder.run(Tasks.java:198) ~[dw-0.1.jar:?]
	at org.apache.iceberg.hive.HiveTableOperations.acquireLock(HiveTableOperations.java:438) ~[dw-0.1.jar:?]
	at org.apache.iceberg.hive.HiveTableOperations.doCommit(HiveTableOperations.java:217) ~[dw-0.1.jar:?]
	at org.apache.iceberg.BaseMetastoreTableOperations.commit(BaseMetastoreTableOperations.java:126) ~[dw-0.1.jar:?]
	at org.apache.iceberg.SnapshotProducer.lambda$commit$2(SnapshotProducer.java:300) ~[dw-0.1.jar:?]
	at org.apache.iceberg.util.Tasks$Builder.runTaskWithRetry(Tasks.java:405) ~[dw-0.1.jar:?]
	at org.apache.iceberg.util.Tasks$Builder.runSingleThreaded(Tasks.java:214) ~[dw-0.1.jar:?]
	at org.apache.iceberg.util.Tasks$Builder.run(Tasks.java:198) ~[dw-0.1.jar:?]
	at org.apache.iceberg.util.Tasks$Builder.run(Tasks.java:190) ~[dw-0.1.jar:?]
	at org.apache.iceberg.SnapshotProducer.commit(SnapshotProducer.java:282) ~[dw-0.1.jar:?]
	at org.apache.iceberg.flink.sink.IcebergFilesCommitter.commitOperation(IcebergFilesCommitter.java:308) ~[dw-0.1.jar:?]
	at org.apache.iceberg.flink.sink.IcebergFilesCommitter.commitDeltaTxn(IcebergFilesCommitter.java:277) ~[dw-0.1.jar:?]
	at org.apache.iceberg.flink.sink.IcebergFilesCommitter.commitUpToCheckpoint(IcebergFilesCommitter.java:219) ~[dw-0.1.jar:?]
	at org.apache.iceberg.flink.sink.IcebergFilesCommitter.notifyCheckpointComplete(IcebergFilesCommitter.java:189) ~[dw-0.1.jar:?]
	at org.apache.flink.streaming.runtime.tasks.StreamOperatorWrapper.notifyCheckpointComplete(StreamOperatorWrapper.java:99) ~[flink-dist_2.12-1.12.5.jar:1.12.5]
	at org.apache.flink.streaming.runtime.tasks.SubtaskCheckpointCoordinatorImpl.notifyCheckpointComplete(SubtaskCheckpointCoordinatorImpl.java:330) ~[flink-dist_2.12-1.12.5.jar:1.12.5]
	at org.apache.flink.streaming.runtime.tasks.StreamTask.notifyCheckpointComplete(StreamTask.java:1092) ~[flink-dist_2.12-1.12.5.jar:1.12.5]
	at org.apache.flink.streaming.runtime.tasks.StreamTask.lambda$notifyCheckpointCompleteAsync$11(StreamTask.java:1057) ~[flink-dist_2.12-1.12.5.jar:1.12.5]
	at org.apache.flink.streaming.runtime.tasks.StreamTask.lambda$notifyCheckpointOperation$13(StreamTask.java:1080) ~[flink-dist_2.12-1.12.5.jar:1.12.5]
	at org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$1.runThrowing(StreamTaskActionExecutor.java:50) [flink-dist_2.12-1.12.5.jar:1.12.5]
	at org.apache.flink.streaming.runtime.tasks.mailbox.Mail.run(Mail.java:90) [flink-dist_2.12-1.12.5.jar:1.12.5]
	at org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.processMail(MailboxProcessor.java:317) [flink-dist_2.12-1.12.5.jar:1.12.5]
	at org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:189) [flink-dist_2.12-1.12.5.jar:1.12.5]
	at org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:619) [flink-dist_2.12-1.12.5.jar:1.12.5]
	at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:583) [flink-dist_2.12-1.12.5.jar:1.12.5]
	at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:758) [flink-dist_2.12-1.12.5.jar:1.12.5]
```
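
Reading the trace bottom-up: completion of a Flink checkpoint triggers `IcebergFilesCommitter` to commit the accumulated data files, the commit path tries to take an exclusive Hive table lock via `HiveTableOperations.acquireLock()`, and once the retries driven by `Tasks.runTaskWithRetry` are exhausted it gives up with `WaitingForLockException`.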
  • Cause:

    When a client requests a lock, the Hive Metastore first creates an exclusive lock on the Hive table in the WAITING state. If no other lock is held on the table, the Metastore changes the state to ACQUIRED; otherwise it sets hl_blockedby_ext_id to the ID of the lock that is blocking it. Either way, the lock information is recorded in the HIVE_LOCKS table.
    HiveMetaStoreClient.lock() adds a new lock request to the HMS HIVE_LOCKS table. If we throw an exception on timeout at this point, our lock request is left stuck there in the WAITING state (potentially blocking subsequent requests) unless we call unlock().

When the lock timeout is reached, a cleanup process will eventually remove these locks, but until then they can block other lock requests. So it is best to clean these locks up proactively whenever possible; a sketch of the client-side flow follows.
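
To make the lifecycle above concrete, here is a minimal sketch of the lock/check/unlock flow against the Hive metastore client API. This is not Iceberg's actual implementation; the database name `db`, table name `tbl`, timeout, and poll interval are placeholder assumptions:

```java
import java.util.Collections;

import org.apache.hadoop.hive.metastore.IMetaStoreClient;
import org.apache.hadoop.hive.metastore.api.LockComponent;
import org.apache.hadoop.hive.metastore.api.LockLevel;
import org.apache.hadoop.hive.metastore.api.LockRequest;
import org.apache.hadoop.hive.metastore.api.LockResponse;
import org.apache.hadoop.hive.metastore.api.LockState;
import org.apache.hadoop.hive.metastore.api.LockType;

public class LockSketch {

  // Acquire an exclusive table lock, poll while WAITING, and crucially call
  // unlock() on timeout so the WAITING row does not linger in HIVE_LOCKS.
  static long acquireTableLock(IMetaStoreClient client) throws Exception {
    LockComponent component = new LockComponent(LockType.EXCLUSIVE, LockLevel.TABLE, "db");
    component.setTablename("tbl"); // placeholder table name

    LockRequest request = new LockRequest(
        Collections.singletonList(component), System.getProperty("user.name"), "localhost");

    LockResponse response = client.lock(request); // inserts a WAITING/ACQUIRED row into HIVE_LOCKS
    long lockId = response.getLockid();
    long deadline = System.currentTimeMillis() + 180_000L; // 3-minute timeout (assumption)

    while (response.getState() == LockState.WAITING) {
      if (System.currentTimeMillis() > deadline) {
        client.unlock(lockId); // release the stuck WAITING row instead of abandoning it
        throw new IllegalStateException("Timed out waiting for lock on db.tbl");
      }
      Thread.sleep(500); // simple fixed poll interval
      response = client.checkLock(lockId); // re-reads the lock state from the metastore
    }

    if (response.getState() != LockState.ACQUIRED) {
      throw new IllegalStateException("Lock not acquired: " + response.getState());
    }
    return lockId; // the caller must unlock() after the commit completes
  }
}
```

Iceberg's real `acquireLock()` drives the polling through `Tasks.runTaskWithRetry` with backoff (visible in the stack trace above), but the key point is the same: if the waiting client gives up without calling `unlock()`, its WAITING row stays behind in HIVE_LOCKS.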

So we need to locate the HIVE_LOCKS table in the HiveMetaStore backing database and delete the lock records corresponding to the table from the error.

```sql
-- Inspect the current lock records first
select hl_lock_ext_id, hl_table, hl_lock_state, hl_lock_type, hl_last_heartbeat, hl_blockedby_ext_id
from HIVE_LOCKS;

-- Then delete the stuck records (605 and 622 were the offending lock IDs in this case)
delete from HIVE_LOCKS where hl_lock_ext_id = 605 or hl_lock_ext_id = 622;
```
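
If the lock IDs are not known up front, the stuck table can be targeted directly. A hedged variant, assuming the affected table is `db.tbl` (placeholder names) and that the metastore schema stores the WAITING state as `'w'`:

```sql
-- Hypothetical cleanup by table name; run the SELECT above and verify the rows
-- before deleting anything ('w' = WAITING, 'a' = ACQUIRED in hl_lock_state).
delete from HIVE_LOCKS
where hl_db = 'db' and hl_table = 'tbl' and hl_lock_state = 'w';
```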


With the stuck lock records deleted, the problem is resolved.
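
A preventive note (property names recalled from Iceberg's `HiveTableOperations` and worth verifying against your version): the lock wait is governed by Hadoop configuration settings such as `iceberg.hive.lock-timeout-ms` (how long `acquireLock()` waits before throwing `WaitingForLockException`) and `iceberg.hive.lock-check-min-wait-ms` / `iceberg.hive.lock-check-max-wait-ms` (the polling backoff bounds). Raising the timeout can make commits more tolerant of contention, but it does not remove the need to clean up WAITING rows left behind by clients that died without calling `unlock()`.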
