
Can an md5 value be passed? #15

Closed
tong233 opened this issue Dec 18, 2017 · 13 comments

@tong233

tong233 commented Dec 18, 2017

Hi, I've been stuck on this for a while. I'm using the js-spark-md5 plugin; console.log prints the md5, but query ends up undefined.

// xx.vue
import { getMd5 } from './md5.js'
options: { // uploader options
  target: docUploadPath,
  testChunks: false,
  query: function(file) {
    getMd5(file.file, function(md5) {
      console.log(md5) // d6ff424312924d81646c3189b42cb5f5
      return {'md5': md5}
    })
  }
}
// md5.js
import './spark-md5.min.js'

export function getMd5(file, callBack) {
  var blobSlice = File.prototype.slice || File.prototype.mozSlice || File.prototype.webkitSlice,
  chunkSize = 8097152, // read in chunks of ~8MB
    chunks = Math.ceil(file.size / chunkSize),
    currentChunk = 0,
    spark = new SparkMD5.ArrayBuffer(),
    fileReader = new FileReader()

  fileReader.onload = function (e) {
    spark.append(e.target.result); // Append array buffer
    currentChunk++;
    if (currentChunk < chunks) {
      loadNext();
    } else {
      callBack(spark.end())
    }
  };

  fileReader.onerror = function () {
    console.warn('oops, something went wrong.');
  };

  function loadNext() {
    var start = currentChunk * chunkSize,
      end = ((start + chunkSize) >= file.size) ? file.size : start + chunkSize;
    fileReader.readAsArrayBuffer(blobSlice.call(file, start, end));
  }
  loadNext();
}
@dolymood
Member

query only supports synchronous values, not asynchronous ones.
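dolymood's point can be seen in a small sketch (illustrative names, not the library's API): a value returned inside an async callback never becomes the return value of query itself, so the uploader sees undefined. A synchronous query only works if the md5 is already stored on the file object beforehand.

```javascript
// Why the original query fails: the inner return goes to getMd5's
// internals, not to the caller of queryAsyncBroken.
function queryAsyncBroken(file, getMd5) {
  getMd5(file, function (md5) {
    return { md5: md5 } // lost: never reaches the uploader
  })
  // falls through and implicitly returns undefined
}

// Works: the md5 was computed earlier (e.g. in preprocess) and
// stashed on the file object, so query can return it synchronously.
function querySync(file) {
  return { md5: file.md5 }
}
```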

@noahlann

noahlann commented May 30, 2018

You can compute it in preprocess:

preprocess (chunk) {
  // runs before upload/test: generate the md5 (skip if the file already has one)
  if (chunk.file.md5 === '' || chunk.file.md5 == null) {
    fileMd5HeadTailTime(chunk.file, this.uploader.opts.chunkSize).then(() => {
      chunk.preprocessFinished()
    })
  } else {
    chunk.preprocessFinished()
  }
}

@xyhxyw

xyhxyw commented Jul 5, 2018

@crazy6995, what is this fileMd5HeadTailTime?

@noahlann

noahlann commented Jul 5, 2018

@xyhxyw It's an md5 generated from the file's first chunk + last chunk + last-modified time, and it can be passed to the backend without problems.

@xuchao321

Hi, how did you solve the async issue? @

@noahlann

noahlann commented Jul 5, 2018

@xuchao321
Take a look: a chunk's upload only continues after preprocessFinished is called; until then it just waits.
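A minimal model of that contract (hypothetical, not the uploader's actual internals): preprocess parks the chunk, and whichever path ends up with the md5 must eventually call preprocessFinished so the upload can resume.

```javascript
// Sketch of the gating contract: the uploader does nothing with the
// chunk until preprocessFinished() fires, whether sync or async.
function preprocess(chunk, computeMd5) {
  if (chunk.file.md5) {
    chunk.preprocessFinished() // md5 already known: continue at once
  } else {
    computeMd5(chunk.file, function (md5) {
      chunk.file.md5 = md5
      chunk.preprocessFinished() // upload resumes only now
    })
  }
}
```

computeMd5 here stands in for any async hashing routine (such as fileMd5HeadTailTime above); the only requirement is that its completion path calls preprocessFinished exactly once.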

@xuchao321

@crazy6995 Hi, could I see how you wrote the options? preprocessFinished

@noahlann

noahlann commented Jul 6, 2018

@xuchao321

options: {
  preprocess: this.preprocess
}
preprocess (chunk) {
  // runs before upload/test: generate the md5 (skip if the file already has one)
  if (chunk.file.md5 === '' || chunk.file.md5 == null) {
    fileMd5HeadTailTime(chunk.file, this.uploader.opts.chunkSize).then(() => {
      chunk.preprocessFinished()
    })
  } else {
    chunk.preprocessFinished()
  }
}

where fileMd5HeadTailTime is:

function fileMd5HeadTailTime (zenFile, chunkSize) {
  return new Promise((resolve, reject) => {
    let file = zenFile.file
    let SparkMD5 = require('spark-md5')
    let spark = new SparkMD5.ArrayBuffer()
    let fileReader = new FileReader()
    let blobSlice =
      File.prototype.slice ||
      File.prototype.mozSlice ||
      File.prototype.webkitSlice
    let chunks = Math.ceil(file.size / chunkSize)
    let currentChunk = 0

    fileReader.onload = e => {
      spark.append(e.target.result)
      if (currentChunk === chunks - 1) {
        // all head/tail chunks done; append lastModified
        let time = new Int8Array(longToByteArray(file.lastModified))
        spark.append(time)
        // console.info('computed hash', spark.end()) // Compute hash
        zenFile.md5 = spark.end()
        resolve()
      } else {
        currentChunk = chunks - 1 // after the first chunk, read the last one directly
        if (currentChunk <= 0) {
          // only one chunk in total: append the time and finish the md5 directly
          let time = new Int8Array(longToByteArray(file.lastModified))
          spark.append(time)
          zenFile.md5 = spark.end()
          resolve()
        } else {
          load()
        }
      }
    }

    fileReader.onerror = e => reject(e)

    let load = () => {
      var start = currentChunk * chunkSize
      var end = start + chunkSize >= file.size ? file.size : start + chunkSize
      fileReader.readAsArrayBuffer(blobSlice.call(file, start, end))
    }

    load()
  })
}
@xuchao321

@crazy6995 Thanks, got it working!!!

@nirvanaspy

@crazy6995 When simultaneousUploads is greater than 1, fileMd5HeadTailTime gets executed multiple times. Is there a workaround?

@adong6053

Can the MD5 of each chunk be passed in query?

@adong6053

@crazy6995 @xuchao321

@noahlann

noahlann commented Nov 24, 2022 via email
