How to switch a TensorFlow Federated EMNIST example to the local fashion_mnist dataset

The following code loads the federated EMNIST (digits-only) dataset with TensorFlow Federated and inspects one client's data. How should it be modified when the dataset is switched to the local fashion_mnist dataset?

```
import nest_asyncio
nest_asyncio.apply()

import collections
import numpy as np
import tensorflow as tf
import tensorflow_federated as tff

np.random.seed(0)
tff.federated_computation(lambda: 'Hello, World!')()

emnist_train, emnist_test = tff.simulation.datasets.emnist.load_data(
    cache_dir='/home/cqx/PycharmProjects/cache/fed_emnist_digitsonly')

example_dataset = emnist_train.create_tf_dataset_for_client(
    emnist_train.client_ids[0])
example_element = next(iter(example_dataset))
example_element['label'].numpy()
```

Posted: 2023-06-08 17:07:41 · Views: 382
To switch the code from the federated `emnist_train` dataset to fashion_mnist, the following changes are needed:

1. Import the fashion_mnist dataset:
```
from tensorflow.keras.datasets import fashion_mnist
```
2. Load the fashion_mnist data:
```
(x_train, y_train), (x_test, y_test) = fashion_mnist.load_data()
```
3. Convert the data into the client-keyed format that TFF expects. Unlike the EMNIST dataset shipped with TFF, fashion_mnist is a pair of plain NumPy arrays with no per-client partition, so `create_tf_dataset_for_client` cannot be called on it directly; the arrays first have to be split into simulated clients and wrapped in a `ClientData` object (a sketch of that step follows below). Once such an object exists, preprocessing and batching mirror the EMNIST example. Here `NUM_EPOCHS`, `BATCH_SIZE`, and `PREFETCH_BUFFER` are constants you define yourself, and the preprocessing assumes the same `pixels`/`label` element layout used in the sketch below:
```
def preprocess_fn(element):
    images = tf.expand_dims(element['pixels'], axis=-1)
    images = tf.cast(images, tf.float32) / 255.0
    labels = tf.one_hot(element['label'], 10)
    return collections.OrderedDict(x=images, y=labels)

def make_federated_data(client_data, client_ids):
    return [client_data.create_tf_dataset_for_client(client_id)
            .map(preprocess_fn)
            .shuffle(500)
            .repeat(NUM_EPOCHS)
            .batch(BATCH_SIZE)
            .prefetch(PREFETCH_BUFFER)
            for client_id in client_ids]

train_data = make_federated_data(fashion_client_data, fashion_client_data.client_ids)
```
After these replacements, the rest of the original federated code works with the new dataset.
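The piece the answer above leaves implicit is how to build that `ClientData` object from the plain arrays. Below is a minimal sketch of one way to do it, assuming an even split of the training set into `NUM_CLIENTS` simulated clients and a TFF release that provides `tff.simulation.datasets.TestClientData` (older releases exposed similar functionality as `tff.simulation.FromTensorSlicesClientData`); the client naming scheme, shard size, and `NUM_CLIENTS` value are illustrative assumptions, not part of the original answer.

```
import collections
import numpy as np
import tensorflow_federated as tff
from tensorflow.keras.datasets import fashion_mnist

NUM_CLIENTS = 10  # hypothetical number of simulated clients

(x_train, y_train), _ = fashion_mnist.load_data()

# Shuffle once so every simulated client gets an i.i.d. slice of the data.
order = np.random.permutation(len(x_train))
x_train, y_train = x_train[order], y_train[order]
shard_size = len(x_train) // NUM_CLIENTS

# Key each client id to an OrderedDict with the same 'pixels'/'label' layout
# that the EMNIST example (and preprocess_fn above) expects.
client_tensors = {
    f'client_{i}': collections.OrderedDict(
        pixels=x_train[i * shard_size:(i + 1) * shard_size],
        label=y_train[i * shard_size:(i + 1) * shard_size].astype(np.int32))
    for i in range(NUM_CLIENTS)
}

fashion_client_data = tff.simulation.datasets.TestClientData(client_tensors)

# Sanity check, mirroring the original EMNIST snippet: inspect one client's
# first example.
example_dataset = fashion_client_data.create_tf_dataset_for_client(
    fashion_client_data.client_ids[0])
example_element = next(iter(example_dataset))
print(example_element['label'].numpy())
```

With `fashion_client_data` in place, `make_federated_data` from the answer above can be called on it directly, and the per-client dataset pipeline behaves the same way the EMNIST one does.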

Related recommendations

Can you help me figure out why I get this error?

```
ImportError: DLL load failed while importing _multiarray_umath: 找不到指定的模块。
ImportError: numpy.core._multiarray_umath failed to import
ImportError: numpy.core.umath failed to import
Traceback (most recent call last):
  File "D:\PyProjects\TF2\test.py", line 1, in <module>
    import tensorflow as tf
  File "D:\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\__init__.py", line 41, in <module>
    from tensorflow.python.tools import module_util as _module_util
  File "D:\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\__init__.py", line 46, in <module>
    from tensorflow.python import data
  File "D:\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\data\__init__.py", line 25, in <module>
    from tensorflow.python.data import experimental
  File "D:\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\data\experimental\__init__.py", line 97, in <module>
    from tensorflow.python.data.experimental import service
  File "D:\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\data\experimental\service\__init__.py", line 353, in <module>
    from tensorflow.python.data.experimental.ops.data_service_ops import distribute
  File "D:\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\data\experimental\ops\data_service_ops.py", line 26, in <module>
    from tensorflow.python.data.experimental.ops import compression_ops
  File "D:\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\data\experimental\ops\compression_ops.py", line 20, in <module>
    from tensorflow.python.data.util import structure
  File "D:\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\data\util\structure.py", line 26, in <module>
    from tensorflow.python.data.util import nest
  File "D:\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\data\util\nest.py", line 40, in <module>
    from tensorflow.python.framework import sparse_tensor as _sparse_tensor
  File "D:\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\framework\sparse_tensor.py", line 28, in <module>
    from tensorflow.python.framework import constant_op
  File "D:\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\framework\constant_op.py", line 29, in <module>
    from tensorflow.python.eager import execute
  File "D:\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\eager\execute.py", line 27, in <module>
    from tensorflow.python.framework import dtypes
  File "D:\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\framework\dtypes.py", line 33, in <module>
    _np_bfloat16 = _pywrap_bfloat16.TF_bfloat16_type()
TypeError: Unable to convert function return value to a Python type! The signature was () -> handle
```

```
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\ProgramData\anaconda3\envs\tensorflow\lib\site-packages\tensorflow\__init__.py", line 37, in <module>
    from tensorflow.python.tools import module_util as _module_util
  File "C:\ProgramData\anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\__init__.py", line 42, in <module>
    from tensorflow.python import data
  File "C:\ProgramData\anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\data\__init__.py", line 21, in <module>
    from tensorflow.python.data import experimental
  File "C:\ProgramData\anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\data\experimental\__init__.py", line 96, in <module>
    from tensorflow.python.data.experimental import service
  File "C:\ProgramData\anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\data\experimental\service\__init__.py", line 419, in <module>
    from tensorflow.python.data.experimental.ops.data_service_ops import distribute
  File "C:\ProgramData\anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\data\experimental\ops\data_service_ops.py", line 24, in <module>
    from tensorflow.python.data.experimental.ops import compression_ops
  File "C:\ProgramData\anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\data\experimental\ops\compression_ops.py", line 16, in <module>
    from tensorflow.python.data.util import structure
  File "C:\ProgramData\anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\data\util\structure.py", line 23, in <module>
    from tensorflow.python.data.util import nest
  File "C:\ProgramData\anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\data\util\nest.py", line 36, in <module>
    from tensorflow.python.framework import sparse_tensor as _sparse_tensor
  File "C:\ProgramData\anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\framework\sparse_tensor.py", line 24, in <module>
    from tensorflow.python.framework import constant_op
  File "C:\ProgramData\anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\framework\constant_op.py", line 25, in <module>
    from tensorflow.python.eager import execute
  File "C:\ProgramData\anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\eager\execute.py", line 23, in <module>
    from tensorflow.python.framework import dtypes
  File "C:\ProgramData\anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\framework\dtypes.py", line 34, in <module>
    _np_bfloat16 = _pywrap_bfloat16.TF_bfloat16_type()
TypeError: Unable to convert function return value to a Python type! The signature was () -> handle
```

```
C:\Users\潘兰香\AppData\Local\Programs\Python\Python312\python.exe D:\智能门禁人脸识别系统\FaceMaskRecog\UI.py
2025-03-10 22:07:48.885086: I tensorflow/core/util/port.cc:153] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable TF_ENABLE_ONEDNN_OPTS=0.
Traceback (most recent call last):
  File "D:\智能门禁人脸识别系统\FaceMaskRecog\UI.py", line 25, in <module>
    from model.SSD import FaceMaskDetection
  File "D:\智能门禁人脸识别系统\FaceMaskRecog\model\SSD.py", line 2, in <module>
    import tensorflow
  File "C:\Users\潘兰香\AppData\Local\Programs\Python\Python312\Lib\site-packages\tensorflow\__init__.py", line 49, in <module>
    from tensorflow._api.v2 import __internal__
  File "C:\Users\潘兰香\AppData\Local\Programs\Python\Python312\Lib\site-packages\tensorflow\_api\v2\__internal__\__init__.py", line 8, in <module>
    from tensorflow._api.v2.__internal__ import autograph
  File "C:\Users\潘兰香\AppData\Local\Programs\Python\Python312\Lib\site-packages\tensorflow\_api\v2\__internal__\autograph\__init__.py", line 8, in <module>
    from tensorflow.python.autograph.core.ag_ctx import control_status_ctx  # line: 34
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\潘兰香\AppData\Local\Programs\Python\Python312\Lib\site-packages\tensorflow\python\autograph\core\ag_ctx.py", line 21, in <module>
    from tensorflow.python.autograph.utils import ag_logging
  File "C:\Users\潘兰香\AppData\Local\Programs\Python\Python312\Lib\site-packages\tensorflow\python\autograph\utils\__init__.py", line 17, in <module>
    from tensorflow.python.autograph.utils.context_managers import control_dependency_on_returns
  File "C:\Users\潘兰香\AppData\Local\Programs\Python\Python312\Lib\site-packages\tensorflow\python\autograph\utils\context_managers.py", line 19, in <module>
    from tensorflow.python.framework import ops
  File "C:\Users\潘兰香\AppData\Local\Programs\Python\Python312\Lib\site-packages\tensorflow\python\framework\ops.py", line 50, in <module>
    from tensorflow.python.eager import context
  File "C:\Users\潘兰香\AppData\Local\Programs\Python\Python312\Lib\site-packages\tensorflow\python\eager\context.py", line 37, in <module>
    from tensorflow.python.eager import execute
  File "C:\Users\潘兰香\AppData\Local\Programs\Python\Python312\Lib\site-packages\tensorflow\python\eager\execute.py", line 23, in <module>
    from tensorflow.python.framework import tensor_shape
  File "C:\Users\潘兰香\AppData\Local\Programs\Python\Python312\Lib\site-packages\tensorflow\python\framework\tensor_shape.py", line 26, in <module>
    from tensorflow.python.saved_model import nested_structure_coder
  File "C:\Users\潘兰香\AppData\Local\Programs\Python\Python312\Lib\site-packages\tensorflow\python\saved_model\nested_structure_coder.py", line 38, in <module>
    from tensorflow.python.util import nest
  File "C:\Users\潘兰香\AppData\Local\Programs\Python\Python312\Lib\site-packages\tensorflow\python\util\nest.py", line 95, in <module>
    from tensorflow.python.util import nest_util
  File "C:\Users\潘兰香\AppData\Local\Programs\Python\Python312\Lib\site-packages\tensorflow\python\util\nest_util.py", line 30, in <module>
    import wrapt as _wrapt
  File "C:\Users\潘兰香\AppData\Roaming\Python\Python312\site-packages\wrapt\__init__.py", line 10, in <module>
    from .decorators import (adapter_factory, AdapterFactory, decorator,
  File "C:\Users\潘兰香\AppData\Roaming\Python\Python312\site-packages\wrapt\decorators.py", line 34, in <module>
    from inspect import ismethod, isclass, formatargspec
ImportError: cannot import name 'formatargspec' from 'inspect' (C:\Users\潘兰香\AppData\Local\Programs\Python\Python312\Lib\inspect.py). Did you mean: 'formatargvalues'?
```

Source code:

```
import jieba
import numpy as np
import matplotlib.pyplot as plt
from keras.models import Model
from keras.layers import Input, Embedding, Bidirectional, LSTM, Dense, Dropout
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from keras.utils import to_categorical
from keras import backend as K
from tensorflow import keras
import warnings

from keras.src.api_export import keras_export
from keras.src.utils.module_utils import dmtree
from keras.src.utils.module_utils import optree

if optree.available:
    from keras.src.tree import optree_impl as tree_impl
elif dmtree.available:
    from keras.src.tree import dmtree_impl as tree_impl
else:
    raise ImportError(
        "To use Keras, you need to have optree installed. "
        "Install it via pip install optree"
    )


def register_tree_node_class(cls):
    return tree_impl.register_tree_node_class(cls)


@keras_export("keras.tree.MAP_TO_NONE")
class MAP_TO_NONE:
    """Special value for use with traverse()."""

    pass


@keras_export("keras.tree.is_nested")
def is_nested(structure):
    """Checks if a given structure is nested.

    Examples:

    >>> keras.tree.is_nested(42)
    False
    >>> keras.tree.is_nested({"foo": 42})
    True

    Args:
        structure: A structure to check.

    Returns:
        True if a given structure is nested, i.e. is a sequence, a mapping,
        or a namedtuple, and False otherwise.
    """
    return tree_impl.is_nested(structure)


@keras_export("keras.tree.traverse")
def traverse(func, structure, top_down=True):
    """Traverses the given nested structure, applying the given function.

    The traversal is depth-first. If top_down is True (default), parents
    are returned before their children (giving the option to avoid
    traversing into a sub-tree).

    Examples:

    >>> v = []
    >>> keras.tree.traverse(v.append, [(1, 2), [3], {"a": 4}], top_down=True)
    [(1, 2), [3], {'a': 4}]
    >>> v
    [[(1
```

TypeError: in user code: TypeError: outer_factory.<locals>.inner_factory.<locals>.tf__combine_features() takes 1 positional argument but 2 were given,下面是这个出现这个问题的原代码,你帮我修改一下import os import re import glob import tensorflow as tf import numpy as np from tqdm import tqdm import matplotlib.pyplot as plt import matplotlib as mpl from sklearn.model_selection import train_test_split import imageio import sys from skimage.transform import resize import traceback from tensorflow.keras import layers, models, Input, Model from tensorflow.keras.optimizers import Adam from pathlib import Path from tensorflow.keras.losses import MeanSquaredError from tensorflow.keras.metrics import MeanAbsoluteError from skimage import measure, morphology, filters # =============== 配置参数===================================== BASE_DIR = "F:/2025.7.2wavelengthtiff" # 根目录路径 START_WAVELENGTH = 788.55500 # 起始波长 END_WAVELENGTH = 788.55600 # 结束波长 STEP = 0.00005 # 波长步长 BATCH_SIZE = 8 # 批处理大小 IMAGE_SIZE = (256, 256) # 图像尺寸 TARGET_CHANNELS = 1 # 目标通道数 - 使用灰度图像 TEST_SIZE = 0.2 # 测试集比例 RANDOM_SEED = 42 # 随机种子 MODEL_SAVE_PATH = Path.home() / "Documents" / "wavelength_model.keras" # 修改为.keras格式 # ================================================================ # 设置中文字体支持 try: mpl.rcParams['font.sans-serif'] = ['SimHei'] # 使用黑体 mpl.rcParams['axes.unicode_minus'] = False # 解决负号显示问题 print("已设置中文字体支持") except: print("警告:无法设置中文字体,图表可能无法正确显示中文") def generate_folder_names(start, end, step): """生成波长文件夹名称列表""" num_folders = int((end - start) / step) + 1 folder_names = [] for i in range(num_folders): wavelength = start + i * step folder_name = f"{wavelength:.5f}" folder_names.append(folder_name) return folder_names def find_best_match_file(folder_path, target_wavelength): """在文件夹中找到波长最接近目标值的TIFF文件""" tiff_files = glob.glob(os.path.join(folder_path, "*.tiff")) + glob.glob(os.path.join(folder_path, "*.tif")) if not tiff_files: return None best_match = None min_diff = float('inf') for file_path in tiff_files: filename = os.path.basename(file_path) match = re.search(r'\s*([\d.]+)_', filename) if not match: continue try: file_wavelength = float(match.group(1)) diff = abs(file_wavelength - target_wavelength) if diff < min_diff: min_diff = diff best_match = file_path except ValueError: continue return best_match def extract_shape_features(binary_image): """从二值化图像中提取形状和边界特征""" features = np.zeros(6, dtype=np.float32) # 初始化特征向量 try: contours = measure.find_contours(binary_image, 0.5) if not contours: return features main_contour = max(contours, key=len) contour_length = len(main_contour) label_image = morphology.label(binary_image) region = measure.regionprops(label_image)[0] contour_area = region.area hull = morphology.convex_hull_image(label_image) hull_area = np.sum(hull) solidity = region.solidity eccentricity = region.eccentricity orientation = region.orientation features[0] = contour_length / 1000 # 归一化 features[1] = contour_area / 1000 # 归一化 features[2] = solidity features[3] = eccentricity features[4] = orientation features[5] = hull_area / 1000 # 凸包面积 except Exception as e: print(f"形状特征提取错误: {e}") traceback.print_exc() return features def load_and_preprocess_image(file_path): """加载并预处理TIFF图像""" try: image = imageio.imread(file_path) if image.dtype == np.uint16: image = image.astype(np.float32) / 65535.0 elif image.dtype == np.uint8: image = image.astype(np.float32) / 255.0 else: image = image.astype(np.float32) if np.max(image) > 1.0: image = image / np.max(image) if len(image.shape) == 3 and image.shape[2] > 1: image = 0.299 * image[:, :, 0] + 
0.587 * image[:, :, 1] + 0.114 * image[:, :, 2] image = np.expand_dims(image, axis=-1) image = resize(image, (IMAGE_SIZE[0], IMAGE_SIZE[1]), anti_aliasing=True) blurred = filters.gaussian(image[..., 0], sigma=1.0) thresh = filters.threshold_otsu(blurred) binary = blurred > thresh * 0.8 return image, binary except Exception as e: print(f"图像加载失败: {e}, 使用空白图像代替") return np.zeros((IMAGE_SIZE[0], IMAGE_SIZE[1], 1), dtype=np.float32), np.zeros((IMAGE_SIZE[0], IMAGE_SIZE[1]), dtype=np.bool) def create_tiff_dataset(file_paths): """从文件路径列表创建TensorFlow数据集""" dataset = tf.data.Dataset.from_tensor_slices(file_paths) def load_wrapper(file_path): file_path_str = file_path.numpy().decode('utf-8') image, binary = load_and_preprocess_image(file_path_str) features = extract_shape_features(binary) return image, features def tf_load_wrapper(file_path): result = tf.py_function( func=load_wrapper, inp=[file_path], Tout=(tf.float32, tf.float32) ) image = result[0] features = result[1] image.set_shape((IMAGE_SIZE[0], IMAGE_SIZE[1], 1)) # 单通道 features.set_shape((6,)) # 6个形状特征 return image, features dataset = dataset.map(tf_load_wrapper, num_parallel_calls=tf.data.experimental.AUTOTUNE) dataset = dataset.batch(BATCH_SIZE).prefetch(tf.data.experimental.AUTOTUNE) return dataset def load_and_prepare_data(): """加载所有数据并准备训练/测试集""" folder_names = generate_folder_names(START_WAVELENGTH, END_WAVELENGTH, STEP) print(f"\n生成的文件夹数量: {len(folder_names)}") print(f"起始文件夹: {folder_names[0]}") print(f"结束文件夹: {folder_names[-1]}") valid_files = [] wavelengths = [] print("\n扫描文件夹并匹配文件...") for folder_name in tqdm(folder_names, desc="处理文件夹"): folder_path = os.path.join(BASE_DIR, folder_name) if not os.path.isdir(folder_path): continue try: target_wavelength = float(folder_name) file_path = find_best_match_file(folder_path, target_wavelength) if file_path: valid_files.append(file_path) wavelengths.append(target_wavelength) except ValueError: continue print(f"\n找到的有效文件: {len(valid_files)}/{len(folder_names)}") if not valid_files: raise ValueError("未找到任何有效文件,请检查路径和文件夹名称") wavelengths = np.array(wavelengths) min_wavelength = np.min(wavelengths) max_wavelength = np.max(wavelengths) wavelength_range = max_wavelength - min_wavelength wavelengths_normalized = (wavelengths - min_wavelength) / wavelength_range print(f"波长范围: {min_wavelength:.6f} 到 {max_wavelength:.6f}, 范围大小: {wavelength_range:.6f}") train_files, test_files, train_wavelengths, test_wavelengths = train_test_split( valid_files, wavelengths_normalized, test_size=TEST_SIZE, random_state=RANDOM_SEED ) print(f"训练集大小: {len(train_files)}") print(f"测试集大小: {len(test_files)}") train_dataset = create_tiff_dataset(train_files) test_dataset = create_tiff_dataset(test_files) train_labels = tf.data.Dataset.from_tensor_slices(train_wavelengths) test_labels = tf.data.Dataset.from_tensor_slices(test_wavelengths) # 修复后的 combine_features 函数 def combine_features(data): """将图像特征和标签组合成模型需要的格式""" image_features, label = data image, shape_features = image_features return (image, shape_features), label train_dataset_unet = tf.data.Dataset.zip((train_dataset, train_labels)).map( combine_features, num_parallel_calls=tf.data.experimental.AUTOTUNE ) test_dataset_unet = tf.data.Dataset.zip((test_dataset, test_labels)).map( combine_features, num_parallel_calls=tf.data.experimental.AUTOTUNE ) train_dataset_cnn_dense = tf.data.Dataset.zip((train_dataset.map(lambda x: x[0]), train_labels)) test_dataset_cnn_dense = tf.data.Dataset.zip((test_dataset.map(lambda x: x[0]), test_labels)) return train_dataset_unet, 
test_dataset_unet, train_dataset_cnn_dense, test_dataset_cnn_dense, valid_files, min_wavelength, wavelength_range def build_unet_model(input_shape, shape_feature_size): """构建 U-Net 模型,同时接受图像输入和形状特征输入""" # 图像输入 image_input = Input(shape=input_shape, name='image_input') # Encoder conv1 = layers.Conv2D(32, (3, 3), activation='relu', padding='same')(image_input) conv1 = layers.Conv2D(32, (3, 3), activation='relu', padding='same')(conv1) pool1 = layers.MaxPooling2D(pool_size=(2, 2))(conv1) conv2 = layers.Conv2D(64, (3, 3), activation='relu', padding='same')(pool1) conv2 = layers.Conv2D(64, (3, 3), activation='relu', padding='same')(conv2) pool2 = layers.MaxPooling2D(pool_size=(2, 2))(conv2) conv3 = layers.Conv2D(128, (3, 3), activation='relu', padding='same')(pool2) conv3 = layers.Conv2D(128, (3, 3), activation='relu', padding='same')(conv3) pool3 = layers.MaxPooling2D(pool_size=(2, 2))(conv3) # Bottleneck conv4 = layers.Conv2D(256, (3, 3), activation='relu', padding='same')(pool3) conv4 = layers.Conv2D(256, (3, 3), activation='relu', padding='same')(conv4) drop4 = layers.Dropout(0.5)(conv4) # Decoder up5 = layers.Conv2D(128, (2, 2), activation='relu', padding='same')(layers.UpSampling2D(size=(2, 2))(drop4)) merge5 = layers.Concatenate()([conv3, up5]) conv5 = layers.Conv2D(128, (3, 3), activation='relu', padding='same')(merge5) conv5 = layers.Conv2D(128, (3, 3), activation='relu', padding='same')(conv5) up6 = layers.Conv2D(64, (2, 2), activation='relu', padding='same')(layers.UpSampling2D(size=(2, 2))(conv5)) merge6 = layers.Concatenate()([conv2, up6]) conv6 = layers.Conv2D(64, (3, 3), activation='relu', padding='same')(merge6) conv6 = layers.Conv2D(64, (3, 3), activation='relu', padding='same')(conv6) up7 = layers.Conv2D(32, (2, 2), activation='relu', padding='same')(layers.UpSampling2D(size=(2, 2))(conv6)) merge7 = layers.Concatenate()([conv1, up7]) conv7 = layers.Conv2D(32, (3, 3), activation='relu', padding='same')(merge7) conv7 = layers.Conv2D(32, (3, 3), activation='relu', padding='same')(conv7) # 形状特征输入 shape_input = Input(shape=(shape_feature_size,), name='shape_input') shape_dense = layers.Dense(128, activation='relu')(shape_input) shape_dense = layers.Dense(64, activation='relu')(shape_dense) # 合并图像特征和形状特征 flat = layers.GlobalAveragePooling2D()(conv7) combined = layers.Concatenate()([flat, shape_dense]) # 输出层 outputs = layers.Dense(1, activation='linear')(combined) model = Model(inputs=[image_input, shape_input], outputs=outputs) model.compile(optimizer=Adam(learning_rate=1e-4), loss='mean_squared_error', metrics=['mae']) return model def build_dense_model(input_shape): """构建简单的全连接网络""" inputs = Input(shape=input_shape) x = layers.Flatten()(inputs) x = layers.Dense(128, activation='relu')(x) x = layers.Dense(64, activation='relu')(x) outputs = layers.Dense(1, activation='linear')(x) model = Model(inputs=[inputs], outputs=[outputs]) model.compile(optimizer=Adam(learning_rate=1e-4), loss='mean_squared_error', metrics=['mae']) return model def build_cnn_model(input_shape): """构建传统的 CNN 模型""" inputs = Input(shape=input_shape) x = layers.Conv2D(32, (3, 3), activation='relu', padding='same')(inputs) x = layers.MaxPooling2D((2, 2))(x) x = layers.Conv2D(64, (3, 3), activation='relu', padding='same')(x) x = layers.MaxPooling2D((2, 2))(x) x = layers.Conv2D(128, (3, 3), activation='relu', padding='same')(x) x = layers.MaxPooling2D((2, 2))(x) x = layers.Conv2D(256, (3, 3), activation='relu', padding='same')(x) x = layers.GlobalAveragePooling2D()(x) x = layers.Dense(256, activation='relu')(x) 
outputs = layers.Dense(1, activation='linear')(x) model = Model(inputs=[inputs], outputs=[outputs]) model.compile(optimizer=Adam(learning_rate=1e-4), loss='mean_squared_error', metrics=['mae']) return model def train_models(train_dataset_unet, test_dataset_unet, train_dataset_cnn_dense, test_dataset_cnn_dense, input_shape, shape_feature_size, wavelength_range): """训练多个模型""" unet_model = build_unet_model(input_shape, shape_feature_size) dense_model = build_dense_model(input_shape) cnn_model = build_cnn_model(input_shape) unet_model.summary() dense_model.summary() cnn_model.summary() callbacks = [ tf.keras.callbacks.EarlyStopping(patience=30, restore_best_weights=True, monitor='val_loss', min_delta=1e-6), tf.keras.callbacks.ReduceLROnPlateau(monitor='val_loss', factor=0.5, patience=10, min_lr=1e-7) ] unet_history = unet_model.fit(train_dataset_unet, epochs=300, validation_data=test_dataset_unet, callbacks=callbacks, verbose=2) dense_history = dense_model.fit(train_dataset_cnn_dense, epochs=300, validation_data=test_dataset_cnn_dense, callbacks=callbacks, verbose=2) cnn_history = cnn_model.fit(train_dataset_cnn_dense, epochs=300, validation_data=test_dataset_cnn_dense, callbacks=callbacks, verbose=2) return unet_model, dense_model, cnn_model def predict_with_voting(models, test_image_path, min_wavelength, wavelength_range): """使用多个模型进行预测,并通过投票机制决定最终结果""" image, binary = load_and_preprocess_image(test_image_path) image = np.expand_dims(image, axis=0) shape_features = extract_shape_features(binary) shape_features = np.expand_dims(shape_features, axis=0) predictions = [] for model in models: if model.name == 'unet_model': predicted_normalized = model.predict([image, shape_features], verbose=0)[0][0] else: predicted_normalized = model.predict(image, verbose=0)[0][0] predicted_wavelength = predicted_normalized * wavelength_range + min_wavelength predictions.append(predicted_wavelength) # 使用投票机制(例如取平均值) final_prediction = np.mean(predictions) print(f"最终预测波长: {final_prediction:.8f} 纳米") return final_prediction def main(): """主函数""" print(f"TensorFlow 版本: {tf.__version__}") try: train_dataset_unet, test_dataset_unet, train_dataset_cnn_dense, test_dataset_cnn_dense, all_files, min_wavelength, wavelength_range = load_and_prepare_data() print(f"最小波长: {min_wavelength:.6f}, 波长范围: {wavelength_range:.6f}") except Exception as e: print(f"数据加载失败: {str(e)}") traceback.print_exc() return print("\n开始训练模型...") try: unet_model, dense_model, cnn_model = train_models(train_dataset_unet, test_dataset_unet, train_dataset_cnn_dense, test_dataset_cnn_dense, (IMAGE_SIZE[0], IMAGE_SIZE[1], 1), 6, wavelength_range) except Exception as e: print(f"模型训练失败: {str(e)}") traceback.print_exc() return print("\n从测试集中随机选择一张图片进行预测...") try: for (images, features), labels in test_dataset_unet.take(1): if images.shape[0] > 0: test_image = images[0].numpy() test_features = features[0].numpy() labels_np = labels.numpy() if labels_np.ndim == 0: true_wavelength_normalized = labels_np.item() else: true_wavelength_normalized = labels_np[0] true_wavelength = true_wavelength_normalized * wavelength_range + min_wavelength test_image_path = "f:/phD/代码/test_image.tiff" imageio.imwrite(test_image_path, (test_image[..., 0] * 255).astype(np.uint8)) predicted_wavelength = predict_with_voting([unet_model, dense_model, cnn_model], test_image_path, min_wavelength, wavelength_range) print(f"真实波长: {true_wavelength:.6f} 纳米") print(f"预测波长: {predicted_wavelength:.6f} 纳米") print(f"绝对误差: {abs(predicted_wavelength - true_wavelength):.8f} 纳米") print(f"相对误差: 
{abs(predicted_wavelength - true_wavelength) / wavelength_range * 100:.4f}%") else: print("错误:测试批次中没有样本") except Exception as e: print(f"测试失败: {str(e)}") traceback.print_exc() print("\n您可以使用自己的图片进行测试:") model = tf.keras.models.load_model(MODEL_SAVE_PATH) image_path = input("请输入您要测试的图片路径(例如:'test_image.tiff'): ") predicted = predict_with_voting([unet_model, dense_model, cnn_model], image_path, min_wavelength, wavelength_range) print(f"预测波长: {predicted:.6f} 纳米") print("\n程序执行完成。") if __name__ == "__main__": os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2' try: from skimage import filters, measure, morphology except ImportError: print("安装必要的库...") import subprocess subprocess.run([sys.executable, "-m", "pip", "install", "scikit-image", "imageio"]) from skimage import filters, measure, morphology main()
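The `TypeError` in the question above (`tf__combine_features() takes 1 positional argument but 2 were given`) comes from how `tf.data.Dataset.map` handles tuple-structured elements: after `tf.data.Dataset.zip((train_dataset, train_labels))` the elements have the form `((image, shape_features), label)`, and `map` passes the two top-level components as separate positional arguments, so a one-argument `combine_features(data)` fails. Below is a minimal, self-contained sketch of the usual fix under that assumption about the element structure; the toy shapes and dataset names are illustrative only, and it addresses only the reported `TypeError` (the original `create_tiff_dataset` also batches its output before the zip, which may need separate attention).

```
import numpy as np
import tensorflow as tf

# Tiny stand-ins for the question's datasets: (image, shape_features) pairs
# zipped with per-example labels. Shapes are illustrative only.
images = tf.data.Dataset.from_tensor_slices(np.zeros((4, 8, 8, 1), np.float32))
feats = tf.data.Dataset.from_tensor_slices(np.zeros((4, 6), np.float32))
labels = tf.data.Dataset.from_tensor_slices(np.zeros((4,), np.float32))
image_feature_ds = tf.data.Dataset.zip((images, feats))

# Each zipped element is ((image, shape_features), label). Dataset.map unpacks
# that top-level tuple into two positional arguments, so the mapping function
# must accept two parameters; a one-argument version raises the reported error.
def combine_features(image_features, label):
    image, shape_features = image_features
    return (image, shape_features), label

combined = tf.data.Dataset.zip((image_feature_ds, labels)).map(
    combine_features, num_parallel_calls=tf.data.experimental.AUTOTUNE)

print(combined.element_spec)  # ((image spec, shape-feature spec), label spec)
```

In the original script, replacing the one-argument `combine_features(data)` with this two-argument signature, while keeping the rest of `load_and_prepare_data` unchanged, should resolve the reported error.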
