Can passing an md5 value be supported? #15
The query option only supports synchronous values, not asynchronous ones.
You can compute it in preprocess:

preprocess (chunk) {
  // Runs before upload/test: generate the md5 (skip if the file already has one)
  if (chunk.file.md5 === '' || chunk.file.md5 == null) {
    fileMd5HeadTailTime(chunk.file, this.uploader.opts.chunkSize).then(() => {
      chunk.preprocessFinished()
    })
  } else {
    chunk.preprocessFinished()
  }
}
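A follow-up sketch (not from the thread): once preprocess has stored the value on the file, a plain synchronous query function can simply read it back. This assumes the flow.js-style query(file, chunk, isTest) function form; the md5 field name is just this thread's convention.

options: {
  preprocess: this.preprocess,
  // synchronous: only reads back what preprocess already computed
  query: (file, chunk, isTest) => ({
    md5: file.md5
  })
}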
@crazy6995, what is this fileMd5HeadTailTime?
@xyhxyw It generates an md5 from the file's first chunk + last chunk + last-modified time, and that value can be passed to the backend without problems.
Hello, how was this async part solved? @
@xuchao321
@crazy6995 Hi, could I see how the options are written, and where preprocessFinished goes?
options: {
  preprocess: this.preprocess
}

preprocess (chunk) {
  // Runs before upload/test: generate the md5 (skip if the file already has one)
  if (chunk.file.md5 === '' || chunk.file.md5 == null) {
    fileMd5HeadTailTime(chunk.file, this.uploader.opts.chunkSize).then(() => {
      chunk.preprocessFinished()
    })
  } else {
    chunk.preprocessFinished()
  }
}

where fileMd5HeadTailTime is:

function fileMd5HeadTailTime (zenFile, chunkSize) {
  return new Promise((resolve, reject) => {
    let file = zenFile.file
    let SparkMD5 = require('spark-md5')
    let spark = new SparkMD5.ArrayBuffer()
    let fileReader = new FileReader()
    let blobSlice =
      File.prototype.slice ||
      File.prototype.mozSlice ||
      File.prototype.webkitSlice
    let chunks = Math.ceil(file.size / chunkSize)
    let currentChunk = 0
    fileReader.onload = e => {
      spark.append(e.target.result)
      if (currentChunk === chunks - 1) {
        // Head and tail chunks are both hashed: append lastModified and finish
        let time = new Int8Array(longToByteArray(file.lastModified))
        spark.append(time)
        // console.info('computed hash', spark.end()) // Compute hash
        zenFile.md5 = spark.end()
        resolve()
      } else {
        currentChunk = chunks - 1 // after the first chunk, jump straight to the last one
        if (currentChunk <= 0) {
          // Only one chunk in total: append the time and finish immediately
          let time = new Int8Array(longToByteArray(file.lastModified))
          spark.append(time)
          zenFile.md5 = spark.end()
          resolve()
        } else {
          load()
        }
      }
    }
    fileReader.onerror = e => reject(e)
    let load = () => {
      var start = currentChunk * chunkSize
      var end = start + chunkSize >= file.size ? file.size : start + chunkSize
      fileReader.readAsArrayBuffer(blobSlice.call(file, start, end))
    }
    load()
  })
}
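The snippet above references a longToByteArray helper that is never shown in the thread. A minimal sketch of what it could look like (an assumption, not the original author's code): split the millisecond timestamp into 8 bytes, least significant byte first.

// Hypothetical helper: convert a (millisecond) timestamp into an
// 8-byte array, least significant byte first.
function longToByteArray (long) {
  let bytes = [0, 0, 0, 0, 0, 0, 0, 0]
  for (let i = 0; i < bytes.length; i++) {
    bytes[i] = long & 0xff          // low 8 bits of the remaining value
    long = Math.floor(long / 256)   // shift right one byte without 32-bit truncation
  }
  return bytes
}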
@crazy6995 Thanks, got it working!!!
@crazy6995 When simultaneousUploads is greater than 1, fileMd5HeadTailTime gets executed multiple times. Is there a way to fix that?
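(This isn't answered in the thread; one possible guard, sketched on the assumption that you control preprocess: cache the in-flight hash promise on the file object so concurrently preprocessed chunks share a single computation. The md5Promise field name is made up for illustration.)

preprocess (chunk) {
  let file = chunk.file
  if (file.md5) {
    chunk.preprocessFinished()
    return
  }
  // Hypothetical field: remember the in-flight promise so that chunks
  // preprocessed in parallel reuse the same md5 computation.
  if (!file.md5Promise) {
    file.md5Promise = fileMd5HeadTailTime(file, this.uploader.opts.chunkSize)
  }
  file.md5Promise.then(() => chunk.preprocessFinished())
}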
Can the MD5 of each individual chunk be passed in query?
@crazy6995 @xuchao321
Per-chunk md5 is not currently supported, only the whole file's.
Hi, I've been fiddling with this for a long time without success. I'm using the js-spark-md5 plugin; I can see the value in the console, but in query it's undefined.