
if(params.get("orderBy")==null || params.get("orderBy")==""){ params.put("orderBy","id"); }这句代码在@RequestMapping("/page") public R page(@RequestParam Map<String, Object> params, HttpServletRequest request){ logger.debug("page方法:,,Controller:{},,params:{}",this.getClass().getName(),JSONObject.toJSONString(params)); if(params.get("orderBy")==null || params.get("orderBy")==""){ params.put("orderBy","id"); } PageUtils page = banjiService.queryPage(params); //字典表数据转换 List<BanjiView> list =(List<BanjiView>)page.getList(); for(BanjiView c:list){ //修改对应字典表字段 dictionaryService.dictionaryConvert(c, request); } return R.ok().put("data", page); }这段代码里的作用是什么

This code does the following: if the query parameters do not specify a sort field, it falls back to ordering by the `id` field (ascending by default), then calls the class service's `queryPage` method to fetch the paged class records, runs the results through the dictionary-table conversion, and returns them.

In detail: if `params` contains no `orderBy` entry, or its value is the empty string, `orderBy` is set to `id`. `banjiService.queryPage(params)` then performs the paged query (filtering and sorting according to the remaining parameters) and wraps the result in a `PageUtils` object. Finally, every `BanjiView` in the result list is passed to `dictionaryService.dictionaryConvert`, which substitutes the matching dictionary-table values into the corresponding fields of each record, and the page data with the converted values is returned.
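For context, a `queryPage` implementation in this style of generated SSM project typically reads `page`, `limit`, and the now guaranteed non-empty `orderBy` out of `params` and hands them to a MyBatis-Plus pager. The sketch below is only an assumption about what that looks like, not the project's actual service code; `BanjiDao`, `selectListView`, and the `page`/`limit` defaults are hypothetical.

    // Hypothetical service-side sketch (MyBatis-Plus 2.x style, matching the EntityWrapper
    // usage quoted elsewhere on this page); not the project's real implementation.
    public PageUtils queryPage(Map<String, Object> params) {
        int current = Integer.parseInt(String.valueOf(params.getOrDefault("page", "1")));
        int size = Integer.parseInt(String.valueOf(params.getOrDefault("limit", "10")));
        String orderBy = String.valueOf(params.get("orderBy")); // "id" unless the caller set something else

        Page<BanjiView> page = new Page<>(current, size);
        EntityWrapper<BanjiView> wrapper = new EntityWrapper<>();
        wrapper.orderBy(orderBy, true); // sort column taken from params, ascending

        page.setRecords(banjiDao.selectListView(page, wrapper)); // hypothetical DAO call
        return new PageUtils(page);
    }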
Related questions

if(params.get("orderBy")==null || params.get("orderBy")==""){ params.put("orderBy","id"); }这段代码的意义在哪里

The purpose of this code is to give the `orderBy` entry in `params` a default value of `"id"`. Specifically, if `params` has no `orderBy` key, or the `orderBy` value is the empty string, it is set to `"id"`. This guarantees that `orderBy` is never empty and avoids potential problems downstream, for example that the `ORDER BY` clause of the generated SQL must name a column.
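One caveat: `params.get("orderBy")==""` compares an `Object` against the empty-string literal by reference, so an empty value that is not the interned `""` instance would slip past the check. A slightly more defensive version of the same default (a sketch; the key and the `"id"` fallback are unchanged) is:

    Object orderBy = params.get("orderBy");
    if (orderBy == null || String.valueOf(orderBy).trim().isEmpty()) {
        params.put("orderBy", "id"); // fall back to the primary key column
    }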

    if(false)
        return R.error(511,"永不会进入");
    else if("用户".equals(role))
        params.put("yonghuId", request.getSession().getAttribute("userId"));
    else if("医生".equals(role))
        params.put("yishengId", request.getSession().getAttribute("userId"));
    if(params.get("orderBy")==null || params.get("orderBy")==""){
        params.put("orderBy","id");
    }

This is a chain of Java conditionals that fills in different parameters depending on the situation. The first branch's condition is the literal `false`, so it never executes and the 511 error is never returned. Next, parameters are added according to the role: if the role is "用户" (user), the session's `userId` is put into `params` as `yonghuId`; if the role is "医生" (doctor), it is put in as `yishengId`. Finally, if `params` has no `orderBy` entry, or its value is null or the empty string, `orderBy` is set to `"id"`.
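Because the `if(false)` branch is dead code, the same logic can be written more compactly. The sketch below assumes `role` is read from the session, as in the controllers this snippet comes from:

    String role = String.valueOf(request.getSession().getAttribute("role")); // assumed source of `role`
    Object userId = request.getSession().getAttribute("userId");
    if ("用户".equals(role)) { // ordinary user: restrict the query to their own records
        params.put("yonghuId", userId);
    } else if ("医生".equals(role)) { // doctor: restrict by doctor id instead
        params.put("yishengId", userId);
    }
    if (params.get("orderBy") == null || "".equals(params.get("orderBy"))) {
        params.put("orderBy", "id");
    }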

Related recommendations

解释一下这段代码:public R autoSort2(@RequestParam Map<String, Object> params,CaipinxinxiEntity caipinxinxi, HttpServletRequest request){ String userId = request.getSession().getAttribute("userId").toString(); String goodtypeColumn = "caipinfenlei"; List<OrdersEntity> orders = ordersService.selectList(new EntityWrapper<OrdersEntity>().eq("userid", userId).eq("tablename", "caipinxinxi").orderBy("addtime", false)); List<String> goodtypes = new ArrayList<String>(); Integer limit = params.get("limit")==null?10:Integer.parseInt(params.get("limit").toString()); List<CaipinxinxiEntity> caipinxinxiList = new ArrayList<CaipinxinxiEntity>();//去重 List<OrdersEntity> ordersDist = new ArrayList<OrdersEntity>(); for(OrdersEntity o1 : orders) { boolean addFlag = true; for(OrdersEntity o2 : ordersDist) { if(o1.getGoodid()==o2.getGoodid() || o1.getGoodtype().equals(o2.getGoodtype())) { addFlag = false; break; } } if(addFlag) ordersDist.add(o1); } if(ordersDist!=null && ordersDist.size()>0) { for(OrdersEntity o : ordersDist) { caipinxinxiList.addAll(caipinxinxiService.selectList(new EntityWrapper<CaipinxinxiEntity>().eq(goodtypeColumn, o.getGoodtype()))); } } EntityWrapper<CaipinxinxiEntity> ew = new EntityWrapper<CaipinxinxiEntity>(); params.put("sort", "id"); params.put("order", "desc"); //调用caipinxinxi对象的queryPage方法 PageUtils page = caipinxinxiService.queryPage(params, MPUtil.sort(MPUtil.between(MPUtil.likeOrEq(ew, caipinxinxi), params), params)); List<CaipinxinxiEntity> pageList = (List<CaipinxinxiEntity>)page.getList(); if(caipinxinxiList.size()<limit) { int toAddNum = (limit-caipinxinxiList.size())<=pageList.size()?(limit-caipinxinxiList.size()):pageList.size(); for(CaipinxinxiEntity o1 : pageList) { boolean addFlag = true; for(CaipinxinxiEntity o2 : caipinxinxiList) { if(o1.getId().intValue()==o2.getId().intValue()) { addFlag = false; break; } } if(addFlag) { caipinxinxiList.add(o1); if(--toAddNum==0) break; } } } page.setList(caipinxinxiList); return R.ok().put("data", page); }
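In plain terms, the method above recommends dishes whose category (`caipinfenlei`) matches something the user has ordered before, then pads the list with the newest dishes until `limit` entries are reached, skipping duplicates by id. A small generic helper capturing that "pad to limit without duplicates" step might look like the following sketch (the helper is illustrative and not part of the project):

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Objects;
    import java.util.function.Function;

    public final class RecommendUtil {
        /** Pads primary with items from fallback, skipping duplicates by id, until limit is reached. */
        public static <T> List<T> padToLimit(List<T> primary, List<T> fallback,
                                             Function<T, Object> idOf, int limit) {
            List<T> out = new ArrayList<>(primary);
            for (T candidate : fallback) {
                if (out.size() >= limit) {
                    break;
                }
                Object id = idOf.apply(candidate);
                boolean exists = out.stream().anyMatch(x -> Objects.equals(idOf.apply(x), id));
                if (!exists) {
                    out.add(candidate);
                }
            }
            return out.size() > limit ? new ArrayList<>(out.subList(0, limit)) : out;
        }
    }

With the entities from the snippet this would be called roughly as `padToLimit(caipinxinxiList, pageList, CaipinxinxiEntity::getId, limit)`.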

/** * 协同算法(按收藏推荐) */ @RequestMapping("/autoSort2") public R autoSort2(@RequestParam Map<String, Object> params,NewsEntity news, HttpServletRequest request){ String userId = request.getSession().getAttribute("userId").toString(); String inteltypeColumn = "typename"; List<StoreupEntity> storeups = storeupService.selectList(new EntityWrapper<StoreupEntity>().eq("type", 1).eq("userid", userId).eq("tablename", "news").orderBy("addtime", false)); List<String> inteltypes = new ArrayList<String>(); Integer limit = params.get("limit")==null?10:Integer.parseInt(params.get("limit").toString()); List<NewsEntity> newsList = new ArrayList<NewsEntity>(); //去重 if(storeups!=null && storeups.size()>0) { List<String> typeList = new ArrayList<String>(); for(StoreupEntity s : storeups) { if(typeList.contains(s.getInteltype())) continue; typeList.add(s.getInteltype()); newsList.addAll(newsService.selectList(new EntityWrapper<NewsEntity>().eq(inteltypeColumn, s.getInteltype()))); } } EntityWrapper<NewsEntity> ew = new EntityWrapper<NewsEntity>(); params.put("sort", "id"); params.put("order", "desc"); PageUtils page = newsService.queryPage(params, MPUtil.sort(MPUtil.between(MPUtil.likeOrEq(ew, news), params), params)); List<NewsEntity> pageList = (List<NewsEntity>)page.getList(); if(newsList.size()limit) { newsList = newsList.subList(0, limit); } page.setList(newsList); return R.ok().put("data", page); }根据上方文件,帮我仔细说明该文件运用的协同过滤算法的具体效果说明
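The closing `if(newsList.size()limit)` in this listing has lost its comparison operator; given the `subList(0, limit)` call that follows, the check is presumably meant to cap the favourites-based list at `limit` entries:

    if (newsList.size() > limit) { // presumed intent: keep at most `limit` recommendations
        newsList = newsList.subList(0, limit);
    }

Functionally, the method recommends news items whose `typename` matches categories the user has favourited (`storeup` rows with type 1, newest first), which is closer to content-based filtering on the user's own history than to user-to-user collaborative filtering.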

const express = require('express'); const cors = require('cors'); const sqlite3 = require('sqlite3').verbose(); const path = require('path'); // 创建服务器 const app = express(); app.use(express.json()); // 修复CORS配置 - 添加详细配置 app.use(cors({ origin: '*', methods: ['GET', 'POST', 'PUT', 'DELETE', 'OPTIONS'], allowedHeaders: ['Content-Type', 'Authorization'], credentials: true })); // 处理预检请求 app.options('*', cors()); // 连接SQLite数据库 const db = new sqlite3.Database('./archive.db', (err) => { if (err) { console.error('数据库连接错误:', err.message); } else { console.log('成功连接SQLite数据库'); // 初始化档案表 db.run(CREATE TABLE IF NOT EXISTS archives ( id INTEGER PRIMARY KEY AUTOINCREMENT, fileNumber TEXT NOT NULL, documentNumber TEXT, responsiblePerson TEXT, title TEXT NOT NULL, date TEXT, projectDate TEXT, securityLevel TEXT, pages INTEGER, retentionPeriod TEXT, carrierForm TEXT, notes TEXT, type TEXT NOT NULL, createdAt TIMESTAMP DEFAULT CURRENT_TIMESTAMP, UNIQUE(fileNumber, type) )); } }); // 获取档案数据 (添加错误处理) app.get('/api/archives', (req, res) => { const type = req.query.type || 'document'; db.all('SELECT * FROM archives WHERE type = ? ORDER BY createdAt DESC', [type], (err, rows) => { if (err) { console.error('数据库查询错误:', err); res.status(500).json({ error: '数据库查询失败' }); } else { res.json(rows); } }); }); // 添加新档案 app.post('/api/archives', (req, res) => { const { fileNumber, documentNumber, responsiblePerson, title, date, projectDate, securityLevel, pages, retentionPeriod, carrierForm, notes, type } = req.body; // 验证必填字段 if (!fileNumber || !title || !type) { return res.status(400).json({ error: '档号、题名和类型为必填项' }); } const sql = INSERT INTO archives ( fileNumber, documentNumber, responsiblePerson, title, date, projectDate, securityLevel, pages, retentionPeriod, carrierForm, notes, type ) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?) ; const values = [ fileNumber, documentNumber || null, responsiblePerson || null, title, date || null, projectDate || null, securityLevel || null, pages ? parseInt(pages) : null, retentionPeriod || null, carrierForm || null, notes || null, type ]; db.run(sql, values, function(err) { if (err) { // 处理唯一键冲突(档号重复) if (err.code === 'SQLITE_CONSTRAINT') { return res.status(409).json({ error: '该档号已存在,请使用其他档号' }); } return res.status(500).json({ error: err.message }); } res.status(201).json({ id: this.lastID, ...req.body }); }); }); // 更新档案 app.put('/api/archives/:id', (req, res) => { const id = req.params.id; const { fileNumber, documentNumber, responsiblePerson, title, date, projectDate, securityLevel, pages, retentionPeriod, carrierForm, notes, type } = req.body; // 验证必填字段 if (!fileNumber || !title || !type) { return res.status(400).json({ error: '档号、题名和类型为必填项' }); } const sql = UPDATE archives SET fileNumber = ?, documentNumber = ?, responsiblePerson = ?, title = ?, date = ?, projectDate = ?, securityLevel = ?, pages = ?, retentionPeriod = ?, carrierForm = ?, notes = ?, type = ? WHERE id = ? ; const values = [ fileNumber, documentNumber || null, responsiblePerson || null, title, date || null, projectDate || null, securityLevel || null, pages ? 
parseInt(pages) : null, retentionPeriod || null, carrierForm || null, notes || null, type, id ]; db.run(sql, values, function(err) { if (err) { // 处理唯一键冲突(档号重复) if (err.code === 'SQLITE_CONSTRAINT') { return res.status(409).json({ error: '该档号已存在,请使用其他档号' }); } return res.status(500).json({ error: err.message }); } if (this.changes === 0) { return res.status(404).json({ error: '档案未找到' }); } res.json({ id: id, ...req.body }); }); }); // 删除档案 app.delete('/api/archives/:id', (req, res) => { const id = req.params.id; db.run('DELETE FROM archives WHERE id = ?', [id], function(err) { if (err) { return res.status(500).json({ error: err.message }); } if (this.changes === 0) { return res.status(404).json({ error: '档案未找到' }); } res.json({ success: true }); }); }); // 批量导入档案 app.post('/api/archives/batch-import', (req, res) => { const data = req.body.data || []; if (data.length === 0) { return res.status(400).json({ error: '没有提供导入数据' }); } let insertedCount = 0; let skippedCount = 0; let processed = 0; // 开始事务 db.serialize(() => { db.run('BEGIN TRANSACTION'); data.forEach(item => { const { fileNumber, documentNumber, responsiblePerson, title, date, projectDate, securityLevel, pages, retentionPeriod, carrierForm, notes, type } = item; // 检查是否已存在相同档号和类型的档案 db.get('SELECT id FROM archives WHERE fileNumber = ? AND type = ?', [fileNumber, type], (err, row) => { if (err) { // 错误处理 return; } if (row) { // 跳过已存在的档案 skippedCount++; checkComplete(); return; } // 插入新档案 const sql = INSERT INTO archives ( fileNumber, documentNumber, responsiblePerson, title, date, projectDate, securityLevel, pages, retentionPeriod, carrierForm, notes, type ) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?) ; const values = [ fileNumber, documentNumber || null, responsiblePerson || null, title, date || null, projectDate || null, securityLevel || null, pages ? parseInt(pages) : null, retentionPeriod || null, carrierForm || null, notes || null, type ]; db.run(sql, values, function(err) { if (!err) { insertedCount++; } checkComplete(); }); }); }); function checkComplete() { processed++; if (processed === data.length) { db.run('COMMIT', (err) => { if (err) { return res.status(500).json({ error: '导入失败' }); } res.json({ insertedCount, skippedCount }); }); } } }); }); // 启动服务器 const port = 3000; app.listen(port, () => { console.log(服务器运行在 http://localhost:${port}); }); 运行这个js文件报错,throw new TypeError(Missing parameter name at ${i}: ${DEBUG_URL});

报错Caused By: javax.faces.view.facelets.FaceletException: 检验数据查询错误,{endDate=2025-07-24, notLikeKey=T11, startDate=2025-07-18},EJBException: You have attempted to set a parameter value using a name of locator that does not exist in the query string SELECT c.id, c.orgCode, c.locator, c.empNo, c.empName, c.itemCode, c.itemDesc, c.useableQty, c.unUseableQty, c.demageQty, c.offQty, c.signNo, c.checkoutEmpNo, c.checkoutEmpName, c.checkOutDate, c.finishedEmpNo, c.finishedEmpName, c.finishedDate, c.payMoney, c.overTime, MAX(n.oldToolGatherNo), c.requestNo, c.erpState FROM ToolCheckEntity c LEFT JOIN ToolNoteRequestEntity n ON n.requestNo = c.requestNo WHERE c.state='已完成' AND c.locator not like :notLikeKey AND (c.checkOutDate BETWEEN :startDate AND :endDate) group by c.id, c.orgCode, c.locator, c.empNo, c.empName,, // 构建动态查询语句 private String buildDynamicQuery(Map<String, String> queryMap, Map<String, String> sortMap, boolean isCountQuery) { StringBuilder sqlBuilder = new StringBuilder(); if (isCountQuery) { // sqlBuilder.append("SELECT COUNT(c) "); sqlBuilder.append("SELECT COUNT(DISTINCT c.id) FROM ToolCheckEntity c "); } else { sqlBuilder.append("SELECT c.id, c.orgCode, c.locator, c.empNo, c.empName, "); sqlBuilder.append("c.itemCode, c.itemDesc, c.useableQty, c.unUseableQty, c.demageQty, "); sqlBuilder.append("c.offQty, c.signNo, c.checkoutEmpNo, c.checkoutEmpName, c.checkOutDate, "); sqlBuilder.append("c.finishedEmpNo, c.finishedEmpName, c.finishedDate, c.payMoney, c.overTime, "); sqlBuilder.append("MAX(n.oldToolGatherNo), c.requestNo, c.erpState "); sqlBuilder.append("FROM ToolCheckEntity c LEFT JOIN ToolNoteRequestEntity n ON n.requestNo = c.requestNo "); } // StringBuilder sqlBuilder = new StringBuilder(); // sqlBuilder.append("SELECT c.id, c.orgCode, c.locator, c.empNo, c.empName, "); // sqlBuilder.append("c.itemCode, c.itemDesc, c.useableQty, c.unUseableQty, c.demageQty, "); // sqlBuilder.append("c.offQty, c.signNo, c.checkoutEmpNo, c.checkoutEmpName, c.checkOutDate, "); // sqlBuilder.append("c.finishedEmpNo, c.finishedEmpName, c.finishedDate, c.payMoney, c.overTime, "); // sqlBuilder.append("n.oldToolGatherNo, c.requestNo, c.erpState "); // sqlBuilder.append("FROM ToolCheckEntity c LEFT JOIN ToolNoteRequestEntity n ON n.requestNo = c.requestNo "); sqlBuilder.append("WHERE "); sqlBuilder.append(" c.state='已完成' "); // 添加WHERE条件 if (queryMap != null && !queryMap.isEmpty()) { // boolean firstCondition = true; // 处理orgCode条件 if (queryMap.containsKey("orgCode") && StringUtils.isNotBlank(queryMap.get("orgCode"))) { sqlBuilder.append(" AND c.orgCode = :orgCode "); } // 处理locator条件 if (queryMap.containsKey("locator") && StringUtils.isNotBlank(queryMap.get("locator"))) { sqlBuilder.append(" AND c.locator = :locator "); } // 处理empNo条件 if (queryMap.containsKey("empNo") && StringUtils.isNotBlank(queryMap.get("empNo"))) { sqlBuilder.append(" AND c.empNo = :empNo "); } if (queryMap.containsKey("itemCode") && StringUtils.isNotBlank(queryMap.get("itemCode"))) { sqlBuilder.append(" AND c.itemCode like :itemCode "); } if (queryMap.containsKey("notLikeKey") && StringUtils.isNotBlank(queryMap.get("notLikeKey"))) { sqlBuilder.append(" AND c.locator not like :notLikeKey "); } if (queryMap.containsKey("startDate") && StringUtils.isNotBlank(queryMap.get("startDate"))&& queryMap.containsKey("endDate") && StringUtils.isNotBlank(queryMap.get("endDate")) ){ // 添加日期范围条件 - 根据业务需求选择合适的日期字段 sqlBuilder.append(" AND (c.checkOutDate BETWEEN :startDate AND :endDate) "); } } if (!isCountQuery) { String groupByStr=" group by 
c.id, c.orgCode, c.locator, c.empNo, c.empName, \n" + " c.itemCode, c.itemDesc, c.useableQty, c.unUseableQty, \n" + " c.demageQty, c.offQty, c.signNo, c.checkoutEmpNo, \n" + " c.checkoutEmpName, c.checkOutDate, c.finishedEmpNo, \n" + " c.finishedEmpName, c.finishedDate, c.payMoney, \n" + " c.overTime, c.requestNo, c.erpState"; sqlBuilder.append(groupByStr); String orderBy=" order by c.checkOutDate desc"; sqlBuilder.append(orderBy); } return sqlBuilder.toString(); } // 设置查询参数 private void setQueryParameters(Query query, Map<String, String> queryMap) { if (queryMap == null) return; if (queryMap.containsKey("orgCode") && StringUtils.isNotBlank(queryMap.get("orgCode"))) { query.setParameter("orgCode", queryMap.get("orgCode")); } if (queryMap.containsKey("locator") && StringUtils.isNotBlank(queryMap.get("locator"))) { query.setParameter("locator", queryMap.get("locator")); } if (queryMap.containsKey("empNo") && StringUtils.isNotBlank(queryMap.get("empNo"))) { query.setParameter("empNo", queryMap.get("empNo")); } if (queryMap.containsKey("itemCode") && StringUtils.isNotBlank(queryMap.get("itemCode"))) { query.setParameter("itemCode", "%"+queryMap.get("itemCode")+"%"); } if (queryMap.containsKey("notLikeKey") && StringUtils.isNotBlank(queryMap.get("notLikeKey"))) { query.setParameter("locator", queryMap.get("notLikeKey")+"%"); } if (queryMap.containsKey("startDate") && StringUtils.isNotBlank(queryMap.get("startDate"))&& queryMap.containsKey("endDate") && StringUtils.isNotBlank(queryMap.get("endDate")) ) { try { SimpleDateFormat sdf = new SimpleDateFormat("yyyy-MM-dd hh:mm"); // 定义日期格式 Date startDate = sdf.parse(queryMap.get("startDate").toString()+" 00:00"); Date endDate = sdf.parse(queryMap.get("endDate").toString()+" 23:59"); long betweenDays = (endDate.getTime() - startDate.getTime()) / 24 / 60 / 60 / 1000; if (betweenDays > 7) { throw new EJBException("查询时间跨度不能超过7天"); }else { // 设置查询参数 query.setParameter("startDate", startDate); query.setParameter("endDate", endDate); } } catch (ParseException e) { e.printStackTrace(); } } }
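The exception quoted at the top ("a name of locator that does not exist in the query string") points at the `notLikeKey` branch of `setQueryParameters`: the JPQL built by `buildDynamicQuery` uses the placeholder `:notLikeKey`, but the value is bound under the name `locator`. A corrected version of that branch would presumably be:

    if (queryMap.containsKey("notLikeKey") && StringUtils.isNotBlank(queryMap.get("notLikeKey"))) {
        // bind under the same name used in the JPQL fragment "c.locator not like :notLikeKey"
        query.setParameter("notLikeKey", queryMap.get("notLikeKey") + "%");
    }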

帮我补充时间段查询的空缺代码, private String buildDynamicQuery(Map<String, String> queryMap, Map<String, String> sortMap) { StringBuilder sqlBuilder = new StringBuilder(); sqlBuilder.append("SELECT c.id, c.orgCode, c.locator, c.empNo, c.empName, "); sqlBuilder.append("c.itemCode, c.itemDesc, c.useableQty, c.unUseableQty, c.demageQty, "); sqlBuilder.append("c.offQty, c.signNo, c.checkoutEmpNo, c.checkoutEmpName, c.checkOutDate, "); sqlBuilder.append("c.finishedEmpNo, c.finishedEmpName, c.finishedDate, c.payMoney, c.overTime, "); sqlBuilder.append("n.oldToolGatherNo, c.requestNo, c.erpState "); sqlBuilder.append("FROM ToolCheckEntity c LEFT JOIN ToolNoteRequestEntity n ON n.requestNo = c.requestNo "); // 添加WHERE条件 if (queryMap != null && !queryMap.isEmpty()) { sqlBuilder.append("WHERE "); boolean firstCondition = true; // 处理orgCode条件 if (queryMap.containsKey("orgCode") && StringUtils.isNotBlank(queryMap.get("orgCode"))) { sqlBuilder.append("c.orgCode = :orgCode "); firstCondition = false; } // 处理locator条件 if (queryMap.containsKey("locator") && StringUtils.isNotBlank(queryMap.get("locator"))) { if (!firstCondition) sqlBuilder.append("AND "); sqlBuilder.append("c.locator = :locator "); firstCondition = false; } // 处理empNo条件 if (queryMap.containsKey("empNo") && StringUtils.isNotBlank(queryMap.get("empNo"))) { if (!firstCondition) sqlBuilder.append("AND "); sqlBuilder.append("c.empNo = :empNo "); firstCondition = false; } if (queryMap.containsKey("itemCode") && StringUtils.isNotBlank(queryMap.get("itemCode"))) { if (!firstCondition) sqlBuilder.append("AND "); sqlBuilder.append("c.itemCode = :itemCode "); firstCondition = false; } if (queryMap.containsKey("startDate") && StringUtils.isNotBlank(queryMap.get("startDate"))&& queryMap.containsKey("endDate") && StringUtils.isNotBlank(queryMap.get("endDate")) ){ } // 如果没有有效条件,移除WHERE关键字 if (firstCondition) { sqlBuilder.setLength(sqlBuilder.length() - 6); // 移除"WHERE " } } return sqlBuilder.toString(); } // 设置查询参数 private void setQueryParameters(Query query, Map<String, String> queryMap) { if (queryMap == null) return; if (queryMap.containsKey("orgCode") && StringUtils.isNotBlank(queryMap.get("orgCode"))) { query.setParameter("orgCode", queryMap.get("orgCode")); } if (queryMap.containsKey("locator") && StringUtils.isNotBlank(queryMap.get("locator"))) { query.setParameter("locator", queryMap.get("locator")); } if (queryMap.containsKey("empNo") && StringUtils.isNotBlank(queryMap.get("empNo"))) { query.setParameter("empNo", queryMap.get("empNo")); } if (queryMap.containsKey("itemCode") && StringUtils.isNotBlank(queryMap.get("itemCode"))) { query.setParameter("itemCode", queryMap.get("itemCode")); } if (queryMap.containsKey("startDate") && StringUtils.isNotBlank(queryMap.get("startDate"))&& queryMap.containsKey("endDate") && StringUtils.isNotBlank(queryMap.get("endDate")) ) { try { SimpleDateFormat sdf = new SimpleDateFormat("yyyy-MM-dd hh:mm"); // 定义日期格式 Date startDate = sdf.parse(queryMap.get("startDate").toString()+" 00:00"); Date endDate = sdf.parse(queryMap.get("endDate").toString()+" 23:59"); long betweenDays = (endDate.getTime() - startDate.getTime()) / 24 / 60 / 60 / 1000; if (betweenDays > 7) { throw new EJBException("查询时间跨度不能超过7天"); }else { query.setParameter(); } } catch (ParseException e) { e.printStackTrace(); } } }
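The missing pieces can be filled in by mirroring the complete version of the same builder quoted earlier on this page: append a `BETWEEN` condition on `c.checkOutDate` in `buildDynamicQuery`, and bind `startDate`/`endDate` in `setQueryParameters`. A sketch with the same parameter names and 7-day guard (note `HH` rather than `hh`, so that "23:59" parses as a 24-hour time):

    // In buildDynamicQuery(...), inside the empty startDate/endDate branch:
    if (!firstCondition) sqlBuilder.append("AND ");
    sqlBuilder.append("(c.checkOutDate BETWEEN :startDate AND :endDate) ");
    firstCondition = false;

    // In setQueryParameters(...), replacing the bare query.setParameter() call
    // (this code runs inside the existing try/catch for ParseException):
    SimpleDateFormat sdf = new SimpleDateFormat("yyyy-MM-dd HH:mm");
    Date startDate = sdf.parse(queryMap.get("startDate") + " 00:00");
    Date endDate = sdf.parse(queryMap.get("endDate") + " 23:59");
    long betweenDays = (endDate.getTime() - startDate.getTime()) / (24L * 60 * 60 * 1000);
    if (betweenDays > 7) {
        throw new EJBException("查询时间跨度不能超过7天");
    }
    query.setParameter("startDate", startDate);
    query.setParameter("endDate", endDate);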

[root@node ~]# start-dfs.sh Starting namenodes on [node] Last login: 二 7月 8 16:00:18 CST 2025 from 192.168.1.92 on pts/0 Starting datanodes Last login: 二 7月 8 16:00:38 CST 2025 on pts/0 Starting secondary namenodes [node] Last login: 二 7月 8 16:00:41 CST 2025 on pts/0 SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder". SLF4J: Defaulting to no-operation (NOP) logger implementation SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details. [root@node ~]# start-yarn.sh Starting resourcemanager Last login: 二 7月 8 16:00:45 CST 2025 on pts/0 Starting nodemanagers Last login: 二 7月 8 16:00:51 CST 2025 on pts/0 [root@node ~]# mapred --daemon start historyserver [root@node ~]# jps 3541 ResourceManager 4007 Jps 2984 NameNode 3944 JobHistoryServer 3274 SecondaryNameNode [root@node ~]# mkdir -p /weblog [root@node ~]# cat > /weblog/access.log << EOF > 192.168.1.1,2023-06-01 10:30:22,/index.html > 192.168.1.2,2023-06-01 10:31:15,/product.html > 192.168.1.1,2023-06-01 10:32:45,/cart.html > 192.168.1.3,2023-06-01 11:45:30,/checkout.html > 192.168.1.4,2023-06-01 12:10:05,/index.html > 192.168.1.2,2023-06-01 14:20:18,/product.htm > EOF [root@node ~]# ls /weblog access.log [root@node ~]# hdfs dfs -mkdir -p /weblog/raw SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder". SLF4J: Defaulting to no-operation (NOP) logger implementation SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details. [root@node ~]# hdfs dfs -put /weblog/access.log /weblog/raw/ SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder". SLF4J: Defaulting to no-operation (NOP) logger implementation SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details. [root@node ~]# hdfs dfs -ls /weblog/raw SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder". SLF4J: Defaulting to no-operation (NOP) logger implementation SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details. 
Found 1 items -rw-r--r-- 3 root supergroup 269 2025-07-08 16:03 /weblog/raw/access.log [root@node ~]# cd /weblog [root@node weblog]# mkdir weblog-mapreduce [root@node weblog]# cd weblog-mapreduce [root@node weblog-mapreduce]# touch CleanMapper.java [root@node weblog-mapreduce]# vim CleanMapper.java import java.io.IOException; import org.apache.hadoop.io.*; import org.apache.hadoop.mapreduce.*; public class CleanMapper extends Mapper<LongWritable, Text, Text, NullWritable> { public void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException { String line = value.toString(); String[] fields = line.split(","); if(fields.length == 3) { String ip = fields[0]; String time = fields[1]; String page = fields[2]; if(ip.matches("\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}")) { String outputLine = ip + "," + time + "," + page; context.write(new Text(outputLine), NullWritable.get()); } } } } [root@node weblog-mapreduce]# touch CleanReducer.java [root@node weblog-mapreduce]# vim CleanReducer.java import java.io.IOException; import org.apache.hadoop.io.*; import org.apache.hadoop.mapreduce.*; public class CleanReducer extends Reducer<Text, NullWritable, Text, NullWritable> { public void reduce(Text key, Iterable<NullWritable> values, Context context) throws IOException, InterruptedException { context.write(key, NullWritable.get()); } } [root@node weblog-mapreduce]# touch LogCleanDriver.java [root@node weblog-mapreduce]# vim LogCleanDriver.java import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.fs.Path; import org.apache.hadoop.io.*; import org.apache.hadoop.mapreduce.*; import org.apache.hadoop.mapreduce.lib.input.FileInputFormat; import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat; public class LogCleanDriver { public static void main(String[] args) throws Exception { Configuration conf = new Configuration(); Job job = Job.getInstance(conf, "Web Log Cleaner"); job.setJarByClass(LogCleanDriver.class); job.setMapperClass(CleanMapper.class); job.setReducerClass(CleanReducer.class); job.setOutputKeyClass(Text.class); job.setOutputValueClass(NullWritable.class); FileInputFormat.addInputPath(job, new Path(args[0])); FileOutputFormat.setOutputPath(job, new Path(args[1])); System.exit(job.waitForCompletion(true) ? 0 : 1); } } [root@node weblog-mapreduce]# ls /weblog/weblog-mapreduce CleanMapper.java CleanReducer.java LogCleanDriver.java [root@node weblog-mapreduce]# javac -classpath $(hadoop classpath) -d . *.java [root@node weblog-mapreduce]# ls /weblog/weblog-mapreduce CleanMapper.class CleanReducer.class LogCleanDriver.class CleanMapper.java CleanReducer.java LogCleanDriver.java [root@node weblog-mapreduce]# jar cf logclean.jar *.class [root@node weblog-mapreduce]# ls /weblog/weblog-mapreduce CleanMapper.class CleanReducer.class LogCleanDriver.class logclean.jar CleanMapper.java CleanReducer.java LogCleanDriver.java [root@node weblog-mapreduce]# hdfs dfs -ls /weblog/raw SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder". SLF4J: Defaulting to no-operation (NOP) logger implementation SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details. Found 1 items -rw-r--r-- 3 root supergroup 269 2025-07-08 16:03 /weblog/raw/access.log [root@node weblog-mapreduce]# hdfs dfs -ls /weblog/output SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder". 
SLF4J: Defaulting to no-operation (NOP) logger implementation SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details. ls: /weblog/output': No such file or directory [root@node weblog-mapreduce]# hadoop jar logclean.jar LogCleanDriver /weblog/raw /weblog/output SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder". SLF4J: Defaulting to no-operation (NOP) logger implementation SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details. [root@node weblog-mapreduce]# [root@node weblog-mapreduce]# mapred job -status job_1751961655287_0001 SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder". SLF4J: Defaulting to no-operation (NOP) logger implementation SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details. Job: job_1751961655287_0001 Job File: hdfs://node:9000/tmp/hadoop-yarn/staging/history/done/2025/07/08/000000/job_1751961655287_0001_conf.xml Job Tracking URL : http://node:19888/jobhistory/job/job_1751961655287_0001 Uber job : false Number of maps: 1 Number of reduces: 1 map() completion: 1.0 reduce() completion: 1.0 Job state: SUCCEEDED retired: false reason for failure: Counters: 54 File System Counters FILE: Number of bytes read=287 FILE: Number of bytes written=552699 FILE: Number of read operations=0 FILE: Number of large read operations=0 FILE: Number of write operations=0 HDFS: Number of bytes read=372 HDFS: Number of bytes written=269 HDFS: Number of read operations=8 HDFS: Number of large read operations=0 HDFS: Number of write operations=2 HDFS: Number of bytes read erasure-coded=0 Job Counters Launched map tasks=1 Launched reduce tasks=1 Data-local map tasks=1 Total time spent by all maps in occupied slots (ms)=1848 Total time spent by all reduces in occupied slots (ms)=2016 Total time spent by all map tasks (ms)=1848 Total time spent by all reduce tasks (ms)=2016 Total vcore-milliseconds taken by all map tasks=1848 Total vcore-milliseconds taken by all reduce tasks=2016 Total megabyte-milliseconds taken by all map tasks=1892352 Total megabyte-milliseconds taken by all reduce tasks=2064384 Map-Reduce Framework Map input records=6 Map output records=6 Map output bytes=269 Map output materialized bytes=287 Input split bytes=103 Combine input records=0 Combine output records=0 Reduce input groups=6 Reduce shuffle bytes=287 Reduce input records=6 Reduce output records=6 Spilled Records=12 Shuffled Maps =1 Failed Shuffles=0 Merged Map outputs=1 GC time elapsed (ms)=95 CPU time spent (ms)=1050 Physical memory (bytes) snapshot=500764672 Virtual memory (bytes) snapshot=5614292992 Total committed heap usage (bytes)=379584512 Peak Map Physical memory (bytes)=293011456 Peak Map Virtual memory (bytes)=2803433472 Peak Reduce Physical memory (bytes)=207753216 Peak Reduce Virtual memory (bytes)=2810859520 Shuffle Errors BAD_ID=0 CONNECTION=0 IO_ERROR=0 WRONG_LENGTH=0 WRONG_MAP=0 WRONG_REDUCE=0 File Input Format Counters Bytes Read=269 File Output Format Counters Bytes Written=269 [root@node weblog-mapreduce]# hdfs dfs -ls /weblog/output SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder". SLF4J: Defaulting to no-operation (NOP) logger implementation SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details. 
Found 2 items -rw-r--r-- 3 root supergroup 0 2025-07-08 16:34 /weblog/output/_SUCCESS -rw-r--r-- 3 root supergroup 269 2025-07-08 16:34 /weblog/output/part-r-00000 [root@node weblog-mapreduce]# hdfs dfs -cat /weblog/output/part-r-00000 | head -5 SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder". SLF4J: Defaulting to no-operation (NOP) logger implementation SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details. 192.168.1.1,2023-06-01 10:30:22,/index.html 192.168.1.1,2023-06-01 10:32:45,/cart.html 192.168.1.2,2023-06-01 10:31:15,/product.html 192.168.1.2,2023-06-01 14:20:18,/product.htm 192.168.1.3,2023-06-01 11:45:30,/checkout.html [root@node weblog-mapreduce]# hive SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder". SLF4J: Defaulting to no-operation (NOP) logger implementation SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details. Hive Session ID = 5199f37c-a381-428a-be1b-0a2afaab8583 Logging initialized using configuration in jar:file:/home/hive-3.1.3/lib/hive-common-3.1.3.jar!/hive-log4j2.properties Async: true Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases. Hive Session ID = f38c99b3-ff7c-4f61-ae07-6b21d86d7160 hive> CREATE EXTERNAL TABLE weblog ( > ip STRING, > access_time TIMESTAMP, > page STRING > ) > ROW FORMAT DELIMITED > FIELDS TERMINATED BY ',' > LOCATION '/weblog/output'; OK Time taken: 1.274 seconds hive> select * from weblog; OK 192.168.1.1 2023-06-01 10:30:22 /index.html 192.168.1.1 2023-06-01 10:32:45 /cart.html 192.168.1.2 2023-06-01 10:31:15 /product.html 192.168.1.2 2023-06-01 14:20:18 /product.htm 192.168.1.3 2023-06-01 11:45:30 /checkout.html 192.168.1.4 2023-06-01 12:10:05 /index.html Time taken: 1.947 seconds, Fetched: 6 row(s) hive> select * from weblog limit 5; OK 192.168.1.1 2023-06-01 10:30:22 /index.html 192.168.1.1 2023-06-01 10:32:45 /cart.html 192.168.1.2 2023-06-01 10:31:15 /product.html 192.168.1.2 2023-06-01 14:20:18 /product.htm 192.168.1.3 2023-06-01 11:45:30 /checkout.html Time taken: 0.148 seconds, Fetched: 5 row(s) hive> hive> CREATE TABLE page_visits AS > SELECT > page, > COUNT(*) AS visits > FROM weblog > GROUP BY page > ORDER BY visits DESC; Query ID = root_20250708183002_ec44d1b4-af24-403c-bb67-380dfb6961c3 Total jobs = 2 Launching Job 1 out of 2 Number of reduce tasks not specified. 
Estimated from input data size: 1 In order to change the average load for a reducer (in bytes): set hive.exec.reducers.bytes.per.reducer=<number> In order to limit the maximum number of reducers: set hive.exec.reducers.max=<number> In order to set a constant number of reducers: set mapreduce.job.reduces=<number> Starting Job = job_1751961655287_0002, Tracking URL = http://node:8088/proxy/application_1751961655287_0002/ Kill Command = /home/hadoop/hadoop3.3/bin/mapred job -kill job_1751961655287_0002 Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1 2025-07-08 18:30:12,692 Stage-1 map = 0%, reduce = 0% 2025-07-08 18:30:16,978 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 1.8 sec 2025-07-08 18:30:23,184 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 3.66 sec MapReduce Total cumulative CPU time: 3 seconds 660 msec Ended Job = job_1751961655287_0002 Launching Job 2 out of 2 Number of reduce tasks determined at compile time: 1 In order to change the average load for a reducer (in bytes): set hive.exec.reducers.bytes.per.reducer=<number> In order to limit the maximum number of reducers: set hive.exec.reducers.max=<number> In order to set a constant number of reducers: set mapreduce.job.reduces=<number> Starting Job = job_1751961655287_0003, Tracking URL = http://node:8088/proxy/application_1751961655287_0003/ Kill Command = /home/hadoop/hadoop3.3/bin/mapred job -kill job_1751961655287_0003 Hadoop job information for Stage-2: number of mappers: 1; number of reducers: 1 2025-07-08 18:30:35,969 Stage-2 map = 0%, reduce = 0% 2025-07-08 18:30:41,155 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 1.23 sec 2025-07-08 18:30:46,313 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.95 sec MapReduce Total cumulative CPU time: 2 seconds 950 msec Ended Job = job_1751961655287_0003 Moving data to directory hdfs://node:9000/hive/warehouse/page_visits MapReduce Jobs Launched: Stage-Stage-1: Map: 1 Reduce: 1 Cumulative CPU: 3.66 sec HDFS Read: 12379 HDFS Write: 251 SUCCESS Stage-Stage-2: Map: 1 Reduce: 1 Cumulative CPU: 2.95 sec HDFS Read: 7308 HDFS Write: 150 SUCCESS Total MapReduce CPU Time Spent: 6 seconds 610 msec OK Time taken: 46.853 seconds hive> hive> describe page_visits; OK page string visits bigint Time taken: 0.214 seconds, Fetched: 2 row(s) hive> CREATE TABLE ip_visits AS > SELECT > ip, > COUNT(*) AS visits > FROM weblog > GROUP BY ip > ORDER BY visits DESC; Query ID = root_20250708183554_da402d08-af34-46f9-a33a-3f66ddd1a580 Total jobs = 2 Launching Job 1 out of 2 Number of reduce tasks not specified. 
Estimated from input data size: 1 In order to change the average load for a reducer (in bytes): set hive.exec.reducers.bytes.per.reducer=<number> In order to limit the maximum number of reducers: set hive.exec.reducers.max=<number> In order to set a constant number of reducers: set mapreduce.job.reduces=<number> Starting Job = job_1751961655287_0004, Tracking URL = http://node:8088/proxy/application_1751961655287_0004/ Kill Command = /home/hadoop/hadoop3.3/bin/mapred job -kill job_1751961655287_0004 Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1 2025-07-08 18:36:04,037 Stage-1 map = 0%, reduce = 0% 2025-07-08 18:36:09,250 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 1.57 sec 2025-07-08 18:36:14,393 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 3.3 sec MapReduce Total cumulative CPU time: 3 seconds 300 msec Ended Job = job_1751961655287_0004 Launching Job 2 out of 2 Number of reduce tasks determined at compile time: 1 In order to change the average load for a reducer (in bytes): set hive.exec.reducers.bytes.per.reducer=<number> In order to limit the maximum number of reducers: set hive.exec.reducers.max=<number> In order to set a constant number of reducers: set mapreduce.job.reduces=<number> Starting Job = job_1751961655287_0005, Tracking URL = http://node:8088/proxy/application_1751961655287_0005/ Kill Command = /home/hadoop/hadoop3.3/bin/mapred job -kill job_1751961655287_0005 Hadoop job information for Stage-2: number of mappers: 1; number of reducers: 1 2025-07-08 18:36:27,073 Stage-2 map = 0%, reduce = 0% 2025-07-08 18:36:31,215 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 1.25 sec 2025-07-08 18:36:36,853 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 3.27 sec MapReduce Total cumulative CPU time: 3 seconds 270 msec Ended Job = job_1751961655287_0005 Moving data to directory hdfs://node:9000/hive/warehouse/ip_visits MapReduce Jobs Launched: Stage-Stage-1: Map: 1 Reduce: 1 Cumulative CPU: 3.3 sec HDFS Read: 12445 HDFS Write: 216 SUCCESS Stage-Stage-2: Map: 1 Reduce: 1 Cumulative CPU: 3.27 sec HDFS Read: 7261 HDFS Write: 129 SUCCESS Total MapReduce CPU Time Spent: 6 seconds 570 msec OK Time taken: 44.523 seconds hive> [root@node weblog-mapreduce]# hive> [root@node weblog-mapreduce]# describe ip_visite; bash: describe: command not found... [root@node weblog-mapreduce]# hive SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder". SLF4J: Defaulting to no-operation (NOP) logger implementation SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details. Hive Session ID = 57dafc2a-afe2-41a4-8159-00f8d44b5add Logging initialized using configuration in jar:file:/home/hive-3.1.3/lib/hive-common-3.1.3.jar!/hive-log4j2.properties Async: true Hive Session ID = f866eae4-4cb4-4403-b7a2-7a52701c5a74 Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases. 
hive> describe ip_visite; FAILED: SemanticException [Error 10001]: Table not found ip_visite hive> describe ip_visits; OK ip string visits bigint Time taken: 0.464 seconds, Fetched: 2 row(s) hive> SELECT * FROM page_visits; OK /index.html 2 /product.html 1 /product.htm 1 /checkout.html 1 /cart.html 1 Time taken: 2.095 seconds, Fetched: 5 row(s) hive> SELECT * FROM ip_visits; OK 192.168.1.2 2 192.168.1.1 2 192.168.1.4 1 192.168.1.3 1 Time taken: 0.176 seconds, Fetched: 4 row(s) hive> hive> [root@node weblog-mapreduce]# [root@node weblog-mapreduce]# mysql -u root -p Enter password: Welcome to the MySQL monitor. Commands end with ; or \g. Your MySQL connection id is 48 Server version: 8.0.42 MySQL Community Server - GPL Copyright (c) 2000, 2025, Oracle and/or its affiliates. Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be trademarks of their respective owners. Type 'help;' or '\h' for help. Type '\c' to clear the current input statement. mysql> CREATE DATABASE IF NOT EXISTS weblog_db; Query OK, 1 row affected (0.06 sec) mysql> USE weblog_db; Database changed mysql> CREATE TABLE IF NOT EXISTS page_visits ( -> page VARCHAR(255), -> visits BIGINT -> ) ENGINE=InnoDB DEFAULT CHARSET=utf8; Query OK, 0 rows affected, 1 warning (0.05 sec) mysql> SHOW TABLES; +---------------------+ | Tables_in_weblog_db | +---------------------+ | page_visits | +---------------------+ 1 row in set (0.00 sec) mysql> DESCRIBE page_visits; +--------+--------------+------+-----+---------+-------+ | Field | Type | Null | Key | Default | Extra | +--------+--------------+------+-----+---------+-------+ | page | varchar(255) | YES | | NULL | | | visits | bigint | YES | | NULL | | +--------+--------------+------+-----+---------+-------+ 2 rows in set (0.00 sec) mysql> CREATE TABLE IF NOT EXISTS ip_visits ( -> ip VARCHAR(15), -> visits BIGINT -> ) ENGINE=InnoDB DEFAULT CHARSET=utf8; Query OK, 0 rows affected, 1 warning (0.02 sec) mysql> SHOW TABLES; +---------------------+ | Tables_in_weblog_db | +---------------------+ | ip_visits | | page_visits | +---------------------+ 2 rows in set (0.01 sec) mysql> DESC ip_visits; +--------+-------------+------+-----+---------+-------+ | Field | Type | Null | Key | Default | Extra | +--------+-------------+------+-----+---------+-------+ | ip | varchar(15) | YES | | NULL | | | visits | bigint | YES | | NULL | | +--------+-------------+------+-----+---------+-------+ 2 rows in set (0.00 sec) mysql> ^C mysql> [root@node weblog-mapreduce]# hive SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder". SLF4J: Defaulting to no-operation (NOP) logger implementation SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details. Hive Session ID = f34e6971-71ae-4aa5-aa22-895061f33bdf Logging initialized using configuration in jar:file:/home/hive-3.1.3/lib/hive-common-3.1.3.jar!/hive-log4j2.properties Async: true Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases. 
Hive Session ID = f7a06e76-e117-4fbb-9ee8-09fdfd002104 hive> DESCRIBE FORMATTED page_visits; OK # col_name data_type comment page string visits bigint # Detailed Table Information Database: default OwnerType: USER Owner: root CreateTime: Tue Jul 08 18:30:47 CST 2025 LastAccessTime: UNKNOWN Retention: 0 Location: hdfs://node:9000/hive/warehouse/page_visits Table Type: MANAGED_TABLE Table Parameters: COLUMN_STATS_ACCURATE {\"BASIC_STATS\":\"true\"} bucketing_version 2 numFiles 1 numRows 5 rawDataSize 70 totalSize 75 transient_lastDdlTime 1751970648 # Storage Information SerDe Library: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe InputFormat: org.apache.hadoop.mapred.TextInputFormat OutputFormat: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat Compressed: No Num Buckets: -1 Bucket Columns: [] Sort Columns: [] Storage Desc Params: serialization.format 1 Time taken: 1.043 seconds, Fetched: 32 row(s) hive> 到这里就不会了 6.2.2sqoop导出格式 6.2.3导出page_visits表 6.2.4导出到ip_visits表 6.3验证导出数据 6.3.1登录MySQL 6.3.2执行查询

int dec_cnk(DEC_CTX * ctx, COM_BITB * bitb, DEC_STAT * stat) { COM_BSR *bs; COM_PIC_HEADER *pic_header; COM_SQH * sqh; #if SVAC_SECURITY_PARAM_SET COM_SEC_PARA_SET* sec_para_set; #endif #if SVAC_AUTHENTICATION COM_AUTH_DATA* auth_data; #endif #if HLS_OPT_PPS COM_PIC_PARA_SET * pps; #endif COM_SH_EXT *shext; COM_CNKH *cnkh; int ret = COM_OK; if (stat) { com_mset(stat, 0, sizeof(DEC_STAT)); } bs = &ctx->bs; sqh = &ctx->info.sqh; #if HLS_OPT_PPS pps = &ctx->info.pps[ctx->info.pps_count]; #endif pic_header = &ctx->info.pic_header; shext = &ctx->info.shext; cnkh = &ctx->info.cnkh; #if SVAC_SECURITY_PARAM_SET sec_para_set = &ctx->info.sec_para_set; #endif #if SVAC_AUTHENTICATION auth_data = &ctx->info.auth_data; #endif /* set error status */ ctx->bs_err = (u8)bitb->err; #if TRACE_RDO_EXCLUDE_I if (pic_header->slice_type != SLICE_I) { #endif COM_TRACE_SET(1); #if TRACE_RDO_EXCLUDE_I } else { COM_TRACE_SET(0); } #endif /* bitstream reader initialization */ com_bsr_init(bs, bitb->addr, bitb->ssize, NULL); SET_SBAC_DEC(bs, &ctx->sbac_dec); #if SVAC_NAL if (bs->cur[3] == SVAC_SPS) #else if (bs->cur[3] == 0xB0) #endif { #if LIB_PIC_MIXBIN int need_update = COM_CT_CRR_SLICE == cnkh->ctype || COM_CT_CRR_SLICE_IMCOPLETE == cnkh->ctype #if SVAC_SECURITY_PARAM_SET || cnkh->ctype == COM_CT_SEC_PARA_SET #endif ; #endif #if HLS_OPT_PPS ctx->info.pps_count = 0; memset(ctx->info.pps, 0, sizeof(ctx->info.pps)); #endif cnkh->ctype = COM_CT_SQH; ret = dec_eco_sqh(bs, sqh); com_assert_rv(COM_SUCCEEDED(ret), ret); #if LIBVC_ON ctx->dpm.libvc_data->is_libpic_processing = sqh->library_stream_flag; ctx->dpm.libvc_data->library_picture_enable_flag = sqh->library_picture_enable_flag; #if LIBPIC_DISPLAY ctx->dpm.libvc_data->libpic_mode_index = sqh->library_picture_mode_index; #endif #endif #if EXTENSION_USER_DATA extension_and_user_data(ctx, bs, 0, sqh, pic_header); #endif #if LIB_PIC_MIXBIN if (sqh->library_stream_flag) { if (!ctx->libpic_init_flag) { ret = sequence_init(ctx, sqh); com_assert_rv(COM_SUCCEEDED(ret), ret); #if MULTI_LAYER_FRAMEWORK g_DOIPrev[ctx->layer_id] = g_CountDOICyCleTime[ctx->layer_id] = 0; #else g_DOIPrev = g_CountDOICyCleTime = 0; #endif ctx->libpic_init_flag = 1; ctx->init_flag = 1; } } else #endif if( !ctx->init_flag ) { ret = sequence_init(ctx, sqh); com_assert_rv(COM_SUCCEEDED(ret), ret); #if MULTI_LAYER_FRAMEWORK g_DOIPrev[ctx->layer_id] = g_CountDOICyCleTime[ctx->layer_id] = 0; #else g_DOIPrev = g_CountDOICyCleTime = 0; #endif ctx->init_flag = 1; } #if LIB_PIC_MIXBIN if (sqh->library_stream_flag && sqh->library_picture_mixbin_flag) { memcpy(&ctx->info.libpic_sqh, sqh, sizeof(COM_SQH)); ret = sequence_init(ctx, sqh); com_assert_rv(COM_SUCCEEDED(ret), ret); } else { memcpy(&ctx->info.normal_sqh, sqh, sizeof(COM_SQH)); if (need_update) { ret = sequence_init(ctx, sqh); com_assert_rv(COM_SUCCEEDED(ret), ret); } } #endif } #if !SVAC_NAL else if( bs->cur[3] == 0xB1 ) { ctx->init_flag = 0; cnkh->ctype = COM_CT_SEQ_END; } #endif #if HLS_OPT_PPS else if (bs->cur[3] == SVAC_PPS) { cnkh->ctype = COM_CT_PPS; ret = dec_eco_pps(bs, sqh, pps); ctx->info.pps_count++; assert(ctx->info.pps_count <= MAX_PPS_NUM); com_assert_rv(COM_SUCCEEDED(ret), ret); #if LIB_PIC_MIXBIN if (sqh->library_stream_flag) ctx->info.libpic_pps_idx = ctx->info.pps_count - 1; else ctx->info.normal_pps_idx = ctx->info.pps_count - 1; #endif } #endif #if SVAC_NAL #if HLS_OPT_PPS else if (bs->cur[3] == SVAC_PH) #else else if (bs->cur[3] == SVAC_PPS) #endif #else else if (bs->cur[3] == 0xB3 || bs->cur[3] == 0xB6) #endif { #if 
MULTI_LAYER_FRAMEWORK if (ctx->layer_id) { if (!ctx->init_flag) { ret = sequence_init(ctx, sqh); com_assert_rv(COM_SUCCEEDED(ret), ret); g_DOIPrev[ctx->layer_id] = g_CountDOICyCleTime[ctx->layer_id] = 0; ctx->init_flag = 1; if (ctx->layer_id && !sqh->sps_independent_layer_flag[ctx->layer_id]) { COM_PM* pm = &(ctx->dpm); int size; pm->pic_tmp[0] = com_pic_alloc(&pm->pa, &ret); pm->pic_tmp[1] = com_pic_alloc(&pm->pa, &ret); size = sizeof(s8) * ctx->info.f_scu * REFP_NUM; memset(pm->pic_tmp[0]->map_refi, -1, size); size = sizeof(s16) * ctx->info.f_scu * REFP_NUM * MV_D; memset(pm->pic_tmp[0]->map_mv, 0, size); #if CU_LEVEL_PRIVACY size = sizeof(u8) * ctx->info.f_scu; memset(pm->pic_tmp[0]->map_privacy, 0, size); #endif } } } #endif #if LIB_PIC_MIXBIN if (COM_CT_CRR_SLICE == cnkh->ctype || COM_CT_CRR_SLICE_IMCOPLETE == cnkh->ctype) { assert(sqh->library_picture_mixbin_flag == 1); memcpy(sqh, &ctx->info.normal_sqh, sizeof(COM_SQH)); ret = sequence_init(ctx, sqh); com_assert_rv(COM_SUCCEEDED(ret), ret); #if HLS_OPT_PPS pps = &ctx->info.pps[ctx->info.normal_pps_idx]; #endif ctx->dpm.libvc_data->is_libpic_processing = sqh->library_stream_flag; ctx->dpm.libvc_data->library_picture_enable_flag = sqh->library_picture_enable_flag; } #endif cnkh->ctype = COM_CT_PICTURE; /* decode slice header */ pic_header->low_delay = sqh->low_delay; int need_minus_256 = 0; #if HLS_OPT_PPS ret = dec_eco_pic_header(bs, ctx, &need_minus_256); #if MULTI_LAYER_FRAMEWORK assert(ctx->layer_id == pic_header->layer_id); if (ctx->layer_id && !sqh->sps_independent_layer_flag[ctx->layer_id] ) { DEC_CTX* ctx_b = (DEC_CTX*)ctx->ctx_b; upsample_base_pic(&ctx->dpm, ctx_b->pic, &ctx_b->info, &ctx->info, ctx_b->layer_id, ctx->layer_id); add_pic(&ctx_b->dpm, &ctx->dpm, ctx_b->pic, ctx_b->layer_id, &ctx_b->info, &ctx->info, ctx_b->info.pic_header.decode_order_index, ctx_b->ptr, ctx->refp, ctx->info.sqh.ref_layer_id[ctx->layer_id], ctx_b->info.poc); } #endif #else ret = dec_eco_pic_header(bs, pic_header, sqh, &need_minus_256); #endif if (need_minus_256) { com_picman_dpbpic_doi_minus_cycle_length( &ctx->dpm ); } #if HLS_OPT_PPS ctx->wq[0] = ctx->info.pps[pic_header->pic_pps_id].wq_4x4_matrix; ctx->wq[1] = ctx->info.pps[pic_header->pic_pps_id].wq_8x8_matrix; #else ctx->wq[0] = pic_header->wq_4x4_matrix; ctx->wq[1] = pic_header->wq_8x8_matrix; #endif if (!sqh->library_stream_flag) { com_picman_check_repeat_doi(&ctx->dpm, pic_header); } #if LIB_PIC_MIXBIN if (sqh->library_stream_flag && sqh->library_picture_mixbin_flag) { memcpy(&ctx->info.libpic_pic_header, pic_header, sizeof(COM_PIC_HEADER)); memcpy(ctx->libpic_pic_esao_params, ctx->info.pic_header.pic_esao_params, N_C * sizeof(ESAO_BLK_PARAM)); memcpy(ctx->libpic_pic_ccsao_params, ctx->info.pic_header.pic_ccsao_params, (N_C - 1) * sizeof(CCSAO_BLK_PARAM)); for (int comp_idx = 0; comp_idx < N_C; comp_idx++) { #if ALF_SHAPE int num_coef = (ctx->info.sqh.adaptive_leveling_filter_enhance_flag) ? 
ALF_MAX_NUM_COEF_SHAPE2 : ALF_MAX_NUM_COEF; #endif copy_alf_param(ctx->dec_alf->libpic_alf_picture_param[comp_idx], ctx->dec_alf->alf_picture_param[comp_idx] #if ALF_SHAPE , num_coef #if ALF_SHIFT + (int)ctx->info.sqh.adaptive_leveling_filter_enhance_flag #endif #endif ); } } #endif #if LIBPIC_DISPLAY ctx->dpm.libvc_data->libpic_index = pic_header->library_picture_index; #endif #if HIGH_LEVEL_PRIVACY memset(ctx->ctx_privacy_data.region_max_num, 0, sizeof(int) * 10); #endif #if EXTENSION_USER_DATA && WRITE_MD5_IN_USER_DATA extension_and_user_data(ctx, bs, 1, sqh, pic_header); #endif com_constrcut_ref_list_doi(pic_header); //add by Yuqun Fan, init rpl list at ph instead of sh #if HLS_RPL #if LIBVC_ON if (!sqh->library_stream_flag) #endif { ret = com_picman_refpic_marking_decoder(&ctx->dpm, pic_header); com_assert_rv(ret == COM_OK, ret); } com_cleanup_useless_pic_buffer_in_pm(&ctx->dpm); /* reference picture lists construction */ ret = com_picman_refp_rpl_based_init_decoder(&ctx->dpm, pic_header, ctx->refp); #if AWP if (ctx->info.pic_header.slice_type == SLICE_P || ctx->info.pic_header.slice_type == SLICE_B) { for (int i = 0; i < ctx->dpm.num_refp[REFP_0]; i++) { ctx->info.pic_header.ph_poc[REFP_0][i] = ctx->refp[i][REFP_0].ptr; } } if (ctx->info.pic_header.slice_type == SLICE_B) { for (int i = 0; i < ctx->dpm.num_refp[REFP_1]; i++) { ctx->info.pic_header.ph_poc[REFP_1][i] = ctx->refp[i][REFP_1].ptr; } } #endif #endif com_assert_rv(COM_SUCCEEDED(ret), ret); } #if SVAC_NAL else if ((bs->cur[3] == SVAC_IDR || bs->cur[3] == SVAC_NON_RAP || bs->cur[3] == SVAC_RAP_I #if LIB_PIC_MIXBIN || bs->cur[3] == SVAC_CRR_L || bs->cur[3] == SVAC_CRR_RL #if DISPLAY_L_NAL_TYPE || bs->cur[3] == SVAC_CRR_DP #endif #if LIB_PIC_ERR_TOL || bs->cur[3] == SVAC_CRR_DL #endif #endif ) && bs->cur[4] <= 0x8E) #else else if (bs->cur[3] >= 0x00 && bs->cur[3] <= 0x8E) #endif { #if LIB_PIC_MIXBIN #if DISPLAY_L_NAL_TYPE if (!sqh->library_stream_flag && (bs->cur[3] == SVAC_CRR_L || bs->cur[3] == SVAC_CRR_DP #if LIB_PIC_ERR_TOL || bs->cur[3] == SVAC_CRR_DL #endif )) #else if (!sqh->library_stream_flag && bs->cur[3] == SVAC_CRR_L) #endif { assert(sqh->library_picture_mixbin_flag == 1); memcpy(sqh, &ctx->info.libpic_sqh, sizeof(COM_SQH)); ret = sequence_init(ctx, sqh); com_assert_rv(COM_SUCCEEDED(ret), ret); #if HLS_OPT_PPS pps = &ctx->info.pps[ctx->info.libpic_pps_idx]; #endif ctx->dpm.libvc_data->is_libpic_processing = sqh->library_stream_flag; ctx->dpm.libvc_data->library_picture_enable_flag = sqh->library_picture_enable_flag; memcpy(pic_header, &ctx->info.libpic_pic_header, sizeof(COM_PIC_HEADER)); memcpy(pic_header->pic_esao_params, ctx->libpic_pic_esao_params, N_C * sizeof(ESAO_BLK_PARAM)); memcpy(pic_header->pic_ccsao_params, ctx->libpic_pic_ccsao_params, (N_C - 1) * sizeof(CCSAO_BLK_PARAM)); memcpy(ctx->pic_alf_on, ctx->libpic_pic_alf_on, N_C * sizeof(int)); for (int comp_idx = 0; comp_idx < N_C; comp_idx++) { #if ALF_SHAPE int num_coef = (ctx->info.sqh.adaptive_leveling_filter_enhance_flag) ? 
