1. About the Author
💖💖Author: 计算机编程果茶熊
💙💙About me: I spent many years teaching computer-science training courses and working as a programming instructor, and I still enjoy teaching. I am proficient in several IT areas, including Java, WeChat Mini Programs, Python, Golang, and Android. I take on customized project development, code walkthroughs, thesis-defense coaching, and documentation writing, and I also know some techniques for lowering similarity-check rates. I like sharing solutions to problems I run into during development and exchanging ideas about technology, so feel free to ask me any questions about code!
💛💛A few words: thank you all for your attention and support!
💜💜
Website practical projects
Android / Mini Program practical projects
💕💕Contact 计算机编程果茶熊 at the end of this article to get the source code
2. System Overview
Big data framework: Hadoop + Spark (Hive requires custom modification)
Development languages: Java + Python (both versions are supported)
Database: MySQL
Backend frameworks: Spring Boot (Spring + Spring MVC + MyBatis) + Django (both versions are supported)
Frontend: Vue + ECharts + HTML + CSS + JavaScript + jQuery
The Beijing Weather Station Data Visualization Analysis System is a weather data analysis platform built on big data technology. It uses Hadoop and Spark as its core big data processing frameworks, supports both Python and Java, and provides efficient storage, processing, and visualization of weather data. The backend is implemented with Django and Spring Boot, while the frontend is built on the Vue + ElementUI + ECharts stack to deliver an intuitive, user-friendly interface. By integrating Hadoop HDFS distributed storage, the Spark compute engine, Spark SQL data processing, and analysis libraries such as Pandas and NumPy, the system forms a complete weather data analysis pipeline. It consists of nine core modules: home page, user center, user management, weather data management, weather time-series analysis, extreme weather event analysis, weather spatial distribution analysis, multi-dimensional weather analysis, and system announcement management, covering storage, management, analysis, and visualization of weather data end to end. Data is persisted in a MySQL database to keep it safe and reliable. The system helps meteorologists process and analyze large-scale weather data efficiently and, through intuitive visualizations, supports weather forecasting, climate change research, and extreme weather event analysis, making it a comprehensive platform that combines data management, analytical processing, and visualization.
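To make the data flow described above concrete, here is a minimal sketch (not code from the project itself) of how Spark can load raw station records from HDFS, aggregate them with Spark SQL, and write the summary into MySQL for the Spring Boot / Django backend to serve to the Vue + ECharts frontend. The HDFS path, database name, table name, and credentials are illustrative assumptions.

# Minimal sketch of the HDFS -> Spark SQL -> MySQL pipeline described above.
# All paths, table names, and credentials below are hypothetical placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("WeatherDailyAggregation").getOrCreate()

# Raw observations stored on HDFS as Parquet (hypothetical path)
raw = spark.read.parquet("/data/weather/raw")
raw.createOrReplaceTempView("weather_raw")

# Daily aggregates computed with Spark SQL
daily = spark.sql("""
    SELECT station_id,
           date,
           AVG(temperature)   AS avg_temperature,
           SUM(precipitation) AS total_precipitation,
           MAX(wind_speed)    AS max_wind_speed
    FROM weather_raw
    GROUP BY station_id, date
""")

# Persist the aggregates to MySQL for the web backend to visualize with ECharts
# (requires the MySQL JDBC driver on the Spark classpath)
daily.write.format("jdbc") \
    .option("url", "jdbc:mysql://localhost:3306/weather_db") \
    .option("dbtable", "daily_weather_summary") \
    .option("user", "root") \
    .option("password", "your_password") \
    .mode("overwrite") \
    .save()

spark.stop()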
3. Beijing Weather Station Data Visualization Analysis System - Video Walkthrough
Not sure how to build a big data graduation project? A step-by-step walkthrough of the Beijing Weather Station Data Visualization Analysis System
4. Beijing Weather Station Data Visualization Analysis System - Feature Showcase
5. Beijing Weather Station Data Visualization Analysis System - Code Showcase
# Feature 1: core handler for weather time-series analysis
def analyze_weather_time_series(station_id, start_date, end_date, metrics, interval='daily'):
    """Analyze the time-series data of the given weather station."""
    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col, avg, max, min, date_format, format_string, year, weekofyear
    import numpy as np
    from statsmodels.tsa.seasonal import seasonal_decompose
    # Initialize the Spark session
    spark = SparkSession.builder.appName("WeatherTimeSeriesAnalysis").getOrCreate()
    # Load the station's weather data from HDFS
    weather_data = spark.read.parquet(f"/data/weather/station/{station_id}")
    # Keep only the records inside the requested date range
    filtered_data = weather_data.filter(
        (col("date") >= start_date) & (col("date") <= end_date)
    )
    # Aggregate the data according to the requested time interval
    if interval == 'weekly':
        # Week-based date_format patterns are unsupported since Spark 3.0, so build the key from year/weekofyear
        grouped_data = filtered_data.withColumn(
            "week", format_string("%d-%02d", year(col("date")), weekofyear(col("date")))
        ).groupBy("week").agg(
            *[avg(m).alias(f"{m}_avg") for m in metrics],
            *[max(m).alias(f"{m}_max") for m in metrics],
            *[min(m).alias(f"{m}_min") for m in metrics]
        ).orderBy("week")
    elif interval == 'monthly':
        grouped_data = filtered_data.withColumn(
            "month", date_format(col("date"), "yyyy-MM")
        ).groupBy("month").agg(
            *[avg(m).alias(f"{m}_avg") for m in metrics],
            *[max(m).alias(f"{m}_max") for m in metrics],
            *[min(m).alias(f"{m}_min") for m in metrics]
        ).orderBy("month")
    else:
        # Default: daily resolution, no aggregation needed
        grouped_data = filtered_data.orderBy("date")
    # Convert to a Pandas DataFrame for the advanced time-series analysis
    pandas_df = grouped_data.toPandas()
    # Decompose each metric into trend, seasonal and residual components
    results = {}
    for metric in metrics:
        if len(pandas_df) > 2:  # make sure there is enough data to decompose
            try:
                # Index daily data by date; aggregated intervals use the average column
                ts_data = pandas_df.set_index('date')[metric] if interval == 'daily' else pandas_df[f"{metric}_avg"]
                # Seasonal decomposition (additive model)
                decomposition = seasonal_decompose(ts_data, model='additive', period=30 if interval == 'daily' else 12)
                # Flag anomalies: residuals beyond three standard deviations
                residuals = decomposition.resid.dropna()
                threshold = 3 * np.std(residuals)
                anomalies = residuals[abs(residuals) > threshold]
                # Store the results for this metric
                results[metric] = {
                    'trend': decomposition.trend.dropna().to_dict(),
                    'seasonal': decomposition.seasonal.dropna().to_dict(),
                    'anomalies': anomalies.to_dict(),
                    'statistics': {
                        'mean': float(ts_data.mean()),
                        'std': float(ts_data.std()),
                        'min': float(ts_data.min()),
                        'max': float(ts_data.max())
                    }
                }
            except Exception as e:
                results[metric] = {'error': str(e)}
    # Stop the Spark session
    spark.stop()
    return results
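# Usage sketch (illustrative only): the station ID "54511" and the metric column
# names below are assumptions and must match the Parquet schema on HDFS.
example_ts = analyze_weather_time_series(
    station_id="54511",
    start_date="2023-01-01",
    end_date="2023-12-31",
    metrics=["temperature", "humidity"],
    interval="monthly"
)
print(example_ts.get("temperature", {}).get("statistics"))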
# Feature 2: core handler for extreme weather event analysis
def analyze_extreme_weather_events(region_id, event_type, threshold, time_range):
    """Analyze extreme weather events within the given region."""
    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col, count, avg, max, min, stddev, lit, abs as spark_abs
    from pyspark.ml.clustering import KMeans
    from pyspark.ml.feature import VectorAssembler
    from pyspark.ml.evaluation import ClusteringEvaluator
    # Initialize the Spark session
    spark = SparkSession.builder.appName("ExtremeWeatherAnalysis").getOrCreate()
    # Load the regional weather data from HDFS
    region_data = spark.read.parquet(f"/data/weather/region/{region_id}")
    # Choose the relevant metrics for each event type
    event_metrics = {
        'heatwave': ['temperature', 'humidity'],
        'coldwave': ['temperature', 'wind_speed'],
        'rainstorm': ['precipitation', 'wind_speed', 'pressure'],
        'drought': ['precipitation', 'temperature', 'soil_moisture'],
        'snowstorm': ['precipitation', 'temperature', 'wind_speed']
    }
    selected_metrics = event_metrics.get(event_type, ['temperature', 'precipitation', 'wind_speed'])
    # Keep only the records inside the requested time range
    start_date, end_date = time_range
    filtered_data = region_data.filter(
        (col("date") >= start_date) & (col("date") <= end_date)
    )
    # Select extreme events according to the event type and threshold
    if event_type == 'heatwave':
        extreme_events = filtered_data.filter(col("temperature") > threshold)
    elif event_type == 'coldwave':
        extreme_events = filtered_data.filter(col("temperature") < threshold)
    elif event_type == 'rainstorm':
        extreme_events = filtered_data.filter(col("precipitation") > threshold)
    elif event_type == 'drought':
        # Days with precipitation below the threshold
        extreme_events = filtered_data.filter(col("precipitation") < threshold)
    else:
        # Fall back to temperature as the filter condition
        extreme_events = filtered_data.filter(
            (col("temperature") > threshold) | (col("temperature") < -threshold)
        )
    extreme_events = extreme_events.cache()
    total_events = extreme_events.count()
    # Cluster the extreme events with machine learning
    if total_events > 0:
        # Assemble the feature vector
        assembler = VectorAssembler(inputCols=selected_metrics, outputCol="features")
        feature_data = assembler.transform(extreme_events)
        # Pick the best number of clusters (2-5) by silhouette score
        evaluator = ClusteringEvaluator(featuresCol="features", predictionCol="prediction")
        best_k, best_score = 2, -1.0
        for k in range(2, 6):
            kmeans = KMeans().setK(k).setSeed(1).setFeaturesCol("features")
            model = kmeans.fit(feature_data)
            predictions = model.transform(feature_data)
            score = evaluator.evaluate(predictions)
            if score > best_score:
                best_score = score
                best_k = k
        # Run the final clustering with the best k
        kmeans = KMeans().setK(best_k).setSeed(1).setFeaturesCol("features")
        model = kmeans.fit(feature_data)
        clustered_data = model.transform(feature_data)
        # Profile each cluster
        cluster_analysis = clustered_data.groupBy("prediction").agg(
            count("*").alias("event_count"),
            *[avg(m).alias(f"{m}_avg") for m in selected_metrics],
            *[max(m).alias(f"{m}_max") for m in selected_metrics],
            *[min(m).alias(f"{m}_min") for m in selected_metrics],
            *[stddev(m).alias(f"{m}_std") for m in selected_metrics]
        )
        # Estimate event duration per station (number of extreme days)
        event_duration = extreme_events.groupBy("station_id").agg(
            count("*").alias("duration_days")
        ).agg(
            avg("duration_days").alias("avg_duration"),
            max("duration_days").alias("max_duration")
        )
        # Score event severity by how far the primary metric deviates from the period mean
        baseline = filtered_data.agg(avg(selected_metrics[0])).first()[0]
        severity = extreme_events.withColumn(
            "severity_score",
            spark_abs(col(selected_metrics[0]) - lit(baseline))
        ).agg(
            avg("severity_score").alias("avg_severity"),
            max("severity_score").alias("max_severity")
        )
        # Assemble the analysis result
        result = {
            "event_type": event_type,
            "total_events": total_events,
            "clusters": cluster_analysis.toPandas().to_dict('records'),
            "duration": event_duration.toPandas().to_dict('records')[0],
            "severity": severity.toPandas().to_dict('records')[0]
        }
    else:
        result = {
            "event_type": event_type,
            "total_events": 0,
            "message": "No extreme events found for the given criteria"
        }
    # Stop the Spark session
    spark.stop()
    return result
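# Usage sketch (illustrative only): the region ID, the 35 degrees Celsius heatwave
# threshold, and column names such as temperature/humidity are assumptions that
# must match the regional Parquet data.
example_events = analyze_extreme_weather_events(
    region_id="beijing",
    event_type="heatwave",
    threshold=35.0,
    time_range=("2023-06-01", "2023-08-31")
)
print(example_events["total_events"])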
# Feature 3: core handler for spatial distribution analysis of weather metrics
def analyze_weather_spatial_distribution(date, metric, region_type='city', interpolation_method='idw'):
    """Analyze the spatial distribution of a weather metric on the given date."""
    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col
    import numpy as np
    from scipy.interpolate import griddata
    import geopandas as gpd
    from shapely.geometry import Point
    from esda.moran import Moran, Moran_Local
    from libpysal.weights import DistanceBand
    # Initialize the Spark session
    spark = SparkSession.builder.appName("WeatherSpatialAnalysis").getOrCreate()
    # Load the daily weather records and the station locations from HDFS
    weather_data = spark.read.parquet(f"/data/weather/daily/{date}")
    station_locations = spark.read.parquet("/data/stations/locations")
    # Join the weather records with the station coordinates
    joined_data = weather_data.join(station_locations, "station_id")
    # Keep only records with a valid value for the requested metric
    valid_data = joined_data.filter(col(metric).isNotNull())
    # Convert to a Pandas DataFrame for the spatial analysis
    pandas_df = valid_data.select("station_id", "latitude", "longitude", metric).toPandas()
    # Build a GeoDataFrame (WGS84 lat/lon coordinates)
    geometry = [Point(xy) for xy in zip(pandas_df.longitude, pandas_df.latitude)]
    gdf = gpd.GeoDataFrame(pandas_df, geometry=geometry, crs="EPSG:4326")
    # Load the boundary data for the requested region level
    if region_type == 'city':
        boundaries = gpd.read_file("/data/boundaries/city_boundaries.geojson")
    elif region_type == 'province':
        boundaries = gpd.read_file("/data/boundaries/province_boundaries.geojson")
    else:  # default: country level
        boundaries = gpd.read_file("/data/boundaries/country_boundary.geojson")
    # Spatial join to determine which region each station falls into
    spatial_join = gpd.sjoin(gdf, boundaries, how="left", predicate="within")
    # Build the interpolation grid
    min_lon, min_lat, max_lon, max_lat = boundaries.total_bounds
    grid_size = 0.01  # roughly 1 km grid resolution
    grid_lon = np.arange(min_lon, max_lon, grid_size)
    grid_lat = np.arange(min_lat, max_lat, grid_size)
    grid_lon_mesh, grid_lat_mesh = np.meshgrid(grid_lon, grid_lat)
    # Input points and values for the interpolation
    points = pandas_df[['longitude', 'latitude']].values
    values = pandas_df[metric].values
    # Interpolate with the selected method
    if interpolation_method == 'idw':
        # Inverse distance weighting
        def idw_interpolation(x, y, z, xi, yi, power=2):
            dist = np.sqrt((xi[:, np.newaxis] - x[np.newaxis, :]) ** 2 +
                           (yi[:, np.newaxis] - y[np.newaxis, :]) ** 2)
            # Avoid division by zero
            dist[dist < 1e-7] = 1e-7
            weights_matrix = 1.0 / (dist ** power)
            return np.sum(weights_matrix * z, axis=1) / np.sum(weights_matrix, axis=1)
        grid_z = idw_interpolation(
            points[:, 0], points[:, 1], values,
            grid_lon_mesh.flatten(), grid_lat_mesh.flatten()
        ).reshape(grid_lon_mesh.shape)
    else:
        # Fall back to linear interpolation
        grid_z = griddata(
            points, values, (grid_lon_mesh, grid_lat_mesh),
            method='linear', fill_value=np.nan
        )
    # Per-region statistics (assumes the boundary file has a 'name' attribute)
    region_stats = spatial_join.groupby('name').agg({
        metric: ['count', 'mean', 'std', 'min', 'max']
    }).reset_index()
    region_stats.columns = ['region_name', 'station_count', f'{metric}_mean',
                            f'{metric}_std', f'{metric}_min', f'{metric}_max']
    # Spatial autocorrelation: distance-band weights (threshold in degrees)
    w = DistanceBand(points, threshold=0.5, binary=False)
    # Global Moran's I spatial autocorrelation index
    moran = Moran(values, w)
    # Local spatial autocorrelation (LISA)
    lisa = Moran_Local(values, w)
    # Hotspot analysis: hotspots are the top 10% of values, coldspots the bottom 10%
    hotspot_threshold = np.percentile(values, 90)
    coldspot_threshold = np.percentile(values, 10)
    hotspots = pandas_df[pandas_df[metric] > hotspot_threshold]
    coldspots = pandas_df[pandas_df[metric] < coldspot_threshold]
    # Assemble the analysis result
    result = {
        "date": date,
        "metric": metric,
        "global_spatial_autocorrelation": {
            "moran_i": float(moran.I),
            "p_value": float(moran.p_sim)
        },
        "local_spatial_autocorrelation": {
            "significant_stations": int((lisa.p_sim < 0.05).sum())
        },
        "interpolation_grid": {
            "longitude": grid_lon.tolist(),
            "latitude": grid_lat.tolist(),
            "values": grid_z.tolist()
        },
        "region_statistics": region_stats.to_dict('records'),
        "hotspots": {
            "count": len(hotspots),
            "locations": hotspots[['station_id', 'latitude', 'longitude', metric]].to_dict('records')
        },
        "coldspots": {
            "count": len(coldspots),
            "locations": coldspots[['station_id', 'latitude', 'longitude', metric]].to_dict('records')
        }
    }
    # Stop the Spark session
    spark.stop()
    return result
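# Usage sketch (illustrative only): the daily Parquet partition for the given date,
# the station location table, and the city boundary GeoJSON are assumed to exist
# at the paths used inside the function.
example_spatial = analyze_weather_spatial_distribution(
    date="2023-07-15",
    metric="temperature",
    region_type="city",
    interpolation_method="idw"
)
print(example_spatial["global_spatial_autocorrelation"])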
6. Beijing Weather Station Data Visualization Analysis System - Documentation Showcase
7. END
💕💕Contact 计算机编程果茶熊 at the end of this article to get the source code