
STL_TO_OBJ: An Introduction to a Binary STL to OBJ Conversion Tool


The name "STL_TO_OBJ" describes exactly what this tool does: it converts STL (StereoLithography) files into OBJ (Wavefront Object) files. STL is the de facto format for 3D printing, while OBJ is widely supported by 3D modeling packages such as 3ds Max. Converting between the two is often necessary to move a model across software boundaries and to keep it editable. The sections below walk through the technology and the files involved.

First, the STL side. An STL file describes a three-dimensional surface as a list of facets: triangles that together approximate the object's geometry. The format comes in two variants, binary and ASCII. Binary STL is considerably more compact, but unlike the ASCII variant it cannot be read or edited directly in a text editor.

OBJ, by contrast, is a more general-purpose 3D format. Beyond geometry (vertices and faces), it can carry material references and texture coordinates, which is why many modeling tools treat it as a preferred interchange format: it preserves more of a model's information. 3ds Max, a popular modeling and animation package, opens OBJ files natively.

The description's phrase "binary STL file (3D printing file) to OBJ format (openable in 3ds Max)" captures the point of the conversion: a model that would otherwise only be printable becomes available for further modeling and animation work. In a design-to-print workflow this step matters, because it lets the designer refine and polish the model before committing it to a printer.

The tag "stl转obj" (STL to OBJ) labels the tool's core function, making it easy to identify what the software does and helping users searching for exactly this kind of converter find it.

The file list of the archive shows three entries:

- STO.exe: the executable that performs the conversion; this is the user's main point of interaction with the tool.
- STO.sln: a solution file, typically created by an IDE such as Visual Studio, that organizes the project's files and build configuration. It presumably references the converter's source code project or its compiled project settings.
- STO: no extension is given, but since it sits alongside STO.exe and STO.sln, it is most likely a configuration file, resource file, or other auxiliary artifact of the project.

To run a conversion, a user extracts the archive into a folder and launches STO.exe. If the program has a graphical interface, it will present controls for choosing the input STL file and the output OBJ location; if it is a command-line tool, the user supplies the necessary paths and parameters as arguments.

Internally, the conversion has to parse the binary STL into an in-memory data structure and then rebuild the geometry in the form OBJ expects. Binary STL stores every facet independently, repeating each vertex once per triangle, while OBJ stores a shared vertex list that faces reference by index, so a converter typically welds duplicated vertices in between. OBJ can also carry texture coordinates and other metadata, but since STL provides none, those features simply go unused. Depending on the tool, coordinate transforms or other mesh processing from the general 3D-graphics toolbox may be applied as well. (The sketches at the end of this article walk through these steps.)

For developers, STO.sln is the key to understanding how the converter works internally. Assuming the source code is included, it lets them open the project, read and modify the code, and rebuild the tool for their own needs: adding features, optimizing performance, or fixing known bugs.

In short, judging from the title, description, tag, and file list, STL_TO_OBJ is a focused converter that turns STL 3D-printing files into OBJ files suitable for 3D modeling software. That conversion keeps a design-and-print workflow flexible and efficient, and STO.exe, STO.sln, and STO are the pieces that implement it.
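None of the tool's own source is shown on this page, but the binary STL layout it has to parse is fixed and public: an 80-byte header, a little-endian uint32 facet count, then 50 bytes per facet holding a float32 normal, three float32 vertices, and a 2-byte attribute field. A minimal Python sketch of that reading step (illustrative only, not STO's code):

```python
import struct

def read_binary_stl(path):
    """Read a binary STL file and return (normals, triangles)."""
    with open(path, "rb") as f:
        f.read(80)                                 # 80-byte header, usually ignorable
        (count,) = struct.unpack("<I", f.read(4))  # little-endian facet count
        normals, triangles = [], []
        for _ in range(count):
            # 12 float32 values (normal + 3 vertices) + uint16 attribute = 50 bytes
            data = struct.unpack("<12fH", f.read(50))
            normals.append(data[0:3])
            triangles.append((data[3:6], data[6:9], data[9:12]))
        return normals, triangles
```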
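The OBJ side is simpler still, because OBJ is plain text: `v x y z` lines declare vertices, and `f i j k` lines reference them by 1-based index. A matching writer sketch, under the same caveat that this is not the tool's code:

```python
def write_obj(path, vertices, faces):
    """Write shared vertices and triangular faces as a Wavefront OBJ file."""
    with open(path, "w") as f:
        f.write("# converted from binary STL\n")
        for x, y, z in vertices:
            f.write(f"v {x} {y} {z}\n")              # one vertex per 'v' line
        for a, b, c in faces:
            f.write(f"f {a + 1} {b + 1} {c + 1}\n")  # OBJ indices are 1-based
```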
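Finally, the vertex-welding step that connects the two, reusing the helpers sketched above. The rounding tolerance and the function names are choices made for this sketch, not anything prescribed by STO.exe:

```python
def stl_to_obj(stl_path, obj_path):
    """Convert binary STL to OBJ, merging vertices duplicated across facets."""
    _, triangles = read_binary_stl(stl_path)
    index_of, vertices, faces = {}, [], []
    for tri in triangles:
        face = []
        for vertex in tri:
            # Round so facets meeting at the "same" point share one index.
            key = tuple(round(c, 6) for c in vertex)
            if key not in index_of:
                index_of[key] = len(vertices)
                vertices.append(key)
            face.append(index_of[key])
        faces.append(tuple(face))
    write_obj(obj_path, vertices, faces)

stl_to_obj("model.stl", "model.obj")  # hypothetical file names
```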
