Chunked upload with a progress bar using vue-simple-uploader + Spring Boot
### Implementing chunked upload with a progress bar
To implement chunked upload with a progress bar, configure `vue-simple-uploader` on the front end and integrate it with a Spring Boot backend service.
#### Front-end setup
Import and register the plugin in the project's entry file `main.js`:
```javascript
import { createApp } from 'vue'
import uploader from 'vue-simple-uploader'
import 'vue-simple-uploader/dist/style.css'
import App from './App.vue'

const app = createApp(App)
app.use(uploader) // globally registers <uploader>, <uploader-btn>, <uploader-list>, etc.
app.mount('#app')
```
Next, create the view component that renders the upload control and a progress list. The example below assumes a view component in a single-page application:
```html
<template>
  <div class="uploader-container">
    <uploader
      ref="uploader"
      :options="options"
      @file-added="onFileAdded"
      @file-progress="onFileProgress"
      @file-success="onFileSuccess"
    >
      <uploader-unsupport></uploader-unsupport>
      <!-- File picker (multiple selection is the default) -->
      <uploader-btn>Select Files</uploader-btn>
      <!-- Progress display -->
      <ul id="files" style="list-style-type:none;">
        <li v-for="item in uploadList" :key="item.name">
          {{ item.name }} - {{ Math.round(item.progress) }}%
        </li>
      </ul>
    </uploader>
  </div>
</template>
<script>
export default {
  data() {
    const baseUrl = 'http://localhost:8080'
    return {
      uploadList: [],
      options: {
        target: `${baseUrl}/api/chunkedUpload`, // chunk upload endpoint
        chunkSize: 2 * 1024 * 1024,             // each chunk is 2MB
        simultaneousUploads: 4,                 // concurrent chunk requests
        testChunks: false                       // skip the "is this chunk already uploaded?" GET probe
      }
    }
  },
  methods: {
    onFileAdded(file) {
      // Track each selected file so the template can render its progress
      this.uploadList.push({ name: file.name, progress: 0 })
    },
    onFileProgress(rootFile, file) {
      const item = this.uploadList.find(el => el.name === file.name)
      if (item) {
        item.progress = file.progress() * 100 // file.progress() returns a value between 0 and 1
      }
    },
    onFileSuccess(rootFile, file) {
      const item = this.uploadList.find(el => el.name === file.name)
      if (item) {
        item.progress = 100
      }
    }
  }
}
</script>
```
The snippet above covers the front-end requirements[^3]. The key points: chunking is enabled by setting `chunkSize`, the number of concurrent chunk requests is controlled with `simultaneousUploads`, and the upload endpoint is set via `target`; the `file-added` event registers each file in the local list, and the `file-progress` event updates its percentage so the user gets real-time feedback.
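For reference when wiring up the backend: by default, uploader.js (the engine behind `vue-simple-uploader`) posts each chunk as a multipart request and attaches a fixed set of form fields alongside the `file` part. A minimal sketch of reading those fields on the server side is shown below; the `ChunkMeta` record is purely illustrative and not part of the controller that follows.
```java
import jakarta.servlet.http.HttpServletRequest; // javax.servlet on Spring Boot 2.x

// Illustrative only: the default form fields uploader.js sends with every chunk.
public record ChunkMeta(int chunkNumber, int totalChunks, long chunkSize, long currentChunkSize,
                        long totalSize, String identifier, String filename, String relativePath) {

    static ChunkMeta from(HttpServletRequest request) {
        return new ChunkMeta(
                Integer.parseInt(request.getParameter("chunkNumber")),    // 1-based index of this chunk
                Integer.parseInt(request.getParameter("totalChunks")),    // total number of chunks
                Long.parseLong(request.getParameter("chunkSize")),        // configured chunk size
                Long.parseLong(request.getParameter("currentChunkSize")), // actual size of this chunk
                Long.parseLong(request.getParameter("totalSize")),        // size of the whole file
                request.getParameter("identifier"),                       // unique identifier per file
                request.getParameter("filename"),                         // original file name
                request.getParameter("relativePath"));                    // path relative to a dropped folder
    }
}
```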
#### Spring Boot backend: receiving the chunks
On the backend, the server must correctly parse the data the client sends for each chunk. Here is a simple controller showing how to accept these requests:
```java
import java.io.IOException;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;
import java.util.stream.Stream;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.http.MediaType;
import org.springframework.http.ResponseEntity;
import org.springframework.util.FileSystemUtils;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.multipart.MultipartFile;

@RestController
@RequestMapping("/api")
public class FileController {

    private static final Logger logger = LoggerFactory.getLogger(FileController.class);

    // Finished files are written here; chunks are staged under java.io.tmpdir/chunks/<identifier>.
    private static final Path UPLOAD_DIR = Paths.get(System.getProperty("java.io.tmpdir"), "uploads");

    /**
     * Receives one chunk per request. uploader.js sends the chunk metadata
     * (filename, identifier, chunkNumber, totalChunks, ...) as multipart form fields.
     */
    @PostMapping(value = "/chunkedUpload", consumes = MediaType.MULTIPART_FORM_DATA_VALUE)
    public ResponseEntity<String> chunkedUpload(@RequestParam("file") MultipartFile chunk,
                                                @RequestParam("filename") String fileName,
                                                @RequestParam("identifier") String identifier,
                                                @RequestParam("chunkNumber") int chunkNumber,
                                                @RequestParam("totalChunks") int totalChunks) {
        try {
            // Stage all chunks of the same file in a directory named after its identifier.
            Path chunkDir = Paths.get(System.getProperty("java.io.tmpdir"), "chunks", identifier);
            Files.createDirectories(chunkDir);
            // Note: sanitize fileName in production code before using it in a path.
            chunk.transferTo(chunkDir.resolve(fileName + "_" + chunkNumber));

            // With simultaneousUploads > 1 the highest chunk number may not arrive last,
            // so merge only when the number of staged chunk files equals totalChunks.
            try (Stream<Path> staged = Files.list(chunkDir)) {
                if (staged.count() == totalChunks) {
                    combineChunksIntoSingleFile(fileName, chunkDir, totalChunks);
                    FileSystemUtils.deleteRecursively(chunkDir); // clean up the staging directory
                }
            }
            return ResponseEntity.ok("{\"status\":\"success\"}");
        } catch (IOException e) {
            logger.error(e.getMessage(), e);
            throw new RuntimeException("Failed to process uploaded parts.", e);
        }
    }

    /** Concatenates chunks 1..totalChunks (chunk numbers are 1-based) into the final file. */
    private void combineChunksIntoSingleFile(String fileName, Path chunkDir, int totalChunks) throws IOException {
        Files.createDirectories(UPLOAD_DIR);
        Path target = UPLOAD_DIR.resolve(fileName);
        try (OutputStream out = Files.newOutputStream(target,
                StandardOpenOption.CREATE, StandardOpenOption.TRUNCATE_EXISTING)) {
            for (int i = 1; i <= totalChunks; i++) {
                Files.copy(chunkDir.resolve(fileName + "_" + i), out); // append in order
            }
        }
    }
}
```
This Java code shows how to build a RESTful API that supports chunked uploads: every chunk is staged in a temporary directory named after the `identifier` the uploader sends, and once the number of staged chunks matches `totalChunks` the parts are concatenated into the final target file[^5].
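One practical note: Spring Boot's default multipart limits (1 MB per file, 10 MB per request) are below the 2 MB chunk size configured on the front end, so they need to be raised or the chunk uploads will be rejected. Below is a minimal sketch using a configuration bean (`UploadConfig` is an illustrative name; the `spring.servlet.multipart.max-file-size` / `max-request-size` properties achieve the same thing).
```java
import org.springframework.boot.web.servlet.MultipartConfigFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.util.unit.DataSize;

import jakarta.servlet.MultipartConfigElement; // javax.servlet on Spring Boot 2.x

@Configuration
public class UploadConfig {

    @Bean
    public MultipartConfigElement multipartConfigElement() {
        MultipartConfigFactory factory = new MultipartConfigFactory();
        factory.setMaxFileSize(DataSize.ofMegabytes(5));     // comfortably above the 2MB chunk size
        factory.setMaxRequestSize(DataSize.ofMegabytes(10)); // upper bound for the whole multipart request
        return factory.createMultipartConfig();
    }
}
```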