# dat

dat is a node module that you can require with e.g. `var dat = require('dat')`.

### var db = dat([path], [options], [onReady])

Returns a new dat instance and either opens the existing underlying database or creates a new empty one. All arguments are optional.
#### path

The path to the folder inside of which dat should put its `.dat` folder. If a `.dat` folder already exists there dat will open the existing dat store, otherwise a new one will be created. If not specified it will use `process.cwd()`.

#### onReady

Gets called with `(err)` when dat is ready to be used. If there is an `err` then it means something went wrong starting up dat.
#### options

- `init` (default `true`) - if `false` dat will not create a new empty database when it starts up
- `storage` (default `true`) - if `false` dat will not try to read the underlying database when it starts up
- `path` (default `process.cwd()`) - if not specified as the first argument to the constructor it will check `options.path` instead
- `adminUser` and `adminPass` (default `undefined`) - if both are set any write operations will require HTTP basic auth with these credentials
- `backend` (default `require('leveldown-prebuilt')`) - pass in a custom leveldown backend
- `blobs` (default `require('lib/blobs.js')`) - pass in a custom blob store
- `replicator` (default `require('lib/replicator.js')`) - pass in a custom replicator
- `remoteAddress` (default `undefined`) - if specified then dat will run in RPC client mode
- `manifest` (default `undefined`) - if `remoteAddress` is also set this RPC manifest object will be used by `multilevel.client`

Note: the `options` object also gets passed to the `levelup` constructor.
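As a minimal sketch of the constructor, here is one way the pieces above might fit together. It is wrapped in a function so the snippet stands alone; the folder name and credentials are made-up values, and it assumes the `dat` module is installed.

```javascript
// Sketch: open (or create) a dat store in ./mydata with basic-auth
// protected writes. Call openExample() in a project with `dat` installed.
function openExample() {
  var dat = require('dat')

  var db = dat('./mydata', {adminUser: 'admin', adminPass: 'secret'}, function (err) {
    if (err) throw err // something went wrong starting up dat
    console.log('dat is ready')
  })

  return db
}
```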
### db.get(key, [options], callback)

Gets a key, calls callback with `(error, value)`. `value` is a JS object.

#### options

- `version` (defaults to latest) - gets the row at a specific version, e.g. `{version: 3}`
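A sketch of both forms of `get` (wrapped in a function so it stands alone; the key `'bob'` and version number are illustrative):

```javascript
// Sketch: fetch the latest value for a key, then a specific version.
function getExample(db) {
  db.get('bob', function (err, value) {
    if (err) return console.error(err)
    console.log(value) // latest version of the row, as a JS object
  })

  // fetch the row as it looked at version 2
  db.get('bob', {version: 2}, function (err, value) {
    if (err) return console.error(err)
    console.log(value)
  })
}
```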
### db.put([key], value, [opts], cb)

Puts value into the database by key. Specify the key you want by setting it as either `key` or `value.key`, e.g. `db.put({key: 'bob'} ... )`. `key` is optional to be compatible with the levelup API.

`cb` will be called with `(error, newVersion)` where `newVersion` is a JS object with `key` and `version` properties.

If something already exists in the database with the key you specified you may receive a conflict error. To ensure you do not overwrite data accidentally you must pass in the current version of the key you wish to update, e.g. if `bob` is in the database at version 1 and you want to update it to add a `foo` key: `db.put({key: 'bob', version: 1, foo: 'bar'})`, which will update the row to version 2.

All versions of all rows are persisted and replicated.

If `Buffer.isBuffer(value)` is truthy then it will store whatever binary data is in `value` (only use this if you know what you are doing).

#### options

- `force` (default `false`) - if true it will bypass revision checking, override any conflicts that would normally happen and create a new version of the row
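The insert-then-update flow described above can be sketched like this (wrapped in a function so it stands alone; the key and column names are illustrative):

```javascript
// Sketch: insert a row, then update it by passing the current version.
function putExample(db) {
  db.put({key: 'bob'}, function (err, newVersion) {
    if (err) return console.error(err)
    console.log(newVersion.version) // version of the freshly inserted row

    // to update, include the version you are updating from
    db.put({key: 'bob', version: newVersion.version, foo: 'bar'}, function (err, updated) {
      if (err) return console.error(err) // conflict error if the version was stale
      console.log(updated.version) // one higher than before
    })
  })
}
```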
### db.delete(key, cb)

Marks `key` as deleted. Note: this does not destroy old versions. Calls `cb` with `(err, newVersion)`.
### var readStream = db.createReadStream([opts])

Returns a read stream over the most recent version of all rows in the dat store.

Rows are returned in the format `{key: key, value: value}` where key is by default a string and value is by default a JS object.

#### options

- `start` (defaults to the beginning of the possible keyspace) - key to start iterating from
- `end` (defaults to the end of the possible keyspace) - key to stop iterating at
- `limit` (default unlimited) - how many rows to return before stopping

Note: not all options from `levelup.createReadStream` are supported at this time.
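A sketch of consuming the read stream (wrapped in a function so it stands alone; the key range and limit are illustrative):

```javascript
// Sketch: iterate over the newest version of every row in a key range.
function readExample(db) {
  var stream = db.createReadStream({start: 'a', end: 'z', limit: 100})

  stream.on('data', function (row) {
    console.log(row.key, row.value) // key is a string, value a JS object
  })

  stream.on('end', function () {
    console.log('done')
  })
}
```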
### var valueStream = db.createValueStream([opts])

Returns a value stream over the most recent version of all rows in the dat store.

By default the returned stream is a readable object stream that will emit 1 JS object per row (equivalent to the `.value` object returned by `createReadStream`). This differs slightly from levelup, where the value stream is not an object stream by default.

You can also pass in options to serialize the values as either CSV or line-delimited JSON (see below).

#### options

- `start` (defaults to the beginning of the possible keyspace) - key to start iterating from
- `end` (defaults to the end of the possible keyspace) - key to stop iterating at
- `limit` (default unlimited) - how many rows to return before stopping
- `format` (default `objectMode`) - if set to `csv` or `json` the stream will not be an object mode stream and will emit serialized data
- `csv` (default `false`) - if true, equivalent to setting `format` to `csv`
- `json` (default `false`) - if true, equivalent to setting `format` to `json`
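A sketch of the two modes (wrapped in a function so it stands alone):

```javascript
// Sketch: stream values once as JS objects and once serialized as CSV.
function valueExample(db) {
  // object mode (the default): one JS object per row
  db.createValueStream().on('data', function (value) {
    console.log(value)
  })

  // serialized mode: pipe CSV rows straight to stdout
  db.createValueStream({csv: true}).pipe(process.stdout)
}
```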
### var keyStream = db.createKeyStream([opts])

Returns a key stream over the most recent version of all keys in the dat store.

By default the returned stream is a readable object stream that will emit 1 JS object per row in the form `{key: key, version: number, deleted: boolean}`. This differs slightly from levelup, where the key stream is not an object stream by default. Dat stores the key, version and deleted status in the key on disk, which is why all 3 properties are returned by this stream.

#### options

- `start` (defaults to the beginning of the possible keyspace) - key to start iterating from
- `end` (defaults to the end of the possible keyspace) - key to stop iterating at
- `limit` (default unlimited) - how many rows to return before stopping
### var changes = db.createChangesStream([opts])

Returns a read stream that iterates over the dat store change log (a log of all CRUD in the history of the database).

Changes are emitted as JS objects that look like `{change: 352, key: 'foo', version: 2}`.

#### options

- `data` (default `false`) - if true will `get` the row data at the change version and include it as `change.value`
- `since` (default `0`) - change ID to start from
- `tail` (default `false`) - if true it will set `since` to the very last change so you only get new changes
- `limit` (default unlimited) - how many changes to return before stopping
- `live` (default `false`) - if true will emit new changes as they happen and never end (unless you manually end the stream)
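A sketch of following the change log (wrapped in a function so it stands alone; the option values are illustrative):

```javascript
// Sketch: follow the change log from the beginning, including row data,
// and stay open for new changes as they happen.
function changesExample(db) {
  var changes = db.createChangesStream({since: 0, data: true, live: true})

  changes.on('data', function (change) {
    // e.g. {change: 352, key: 'foo', version: 2, value: {...}}
    console.log(change.change, change.key, change.version)
  })
}
```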
### var writeStream = db.createWriteStream([opts])

Returns a new write stream. You can write data to it. For every thing you write it will write back the success/fail status as a JS object.

You can write:

- raw CSV (e.g. `fs.createReadStream('data.csv')`)
- raw line separated JSON objects
- JS objects (e.g. `objectMode`)

#### options

- `format` (defaults to `objectMode`) - set this to `json`, `csv`, or `protobuf` to tell the write stream how to parse the data you write to it
- `csv` - raw CSV data. setting to true is equivalent to `{format: 'csv'}`
- `json` - line-delimited JSON objects. setting to true is equivalent to `{format: 'json'}`
- `protobuf` - protocol buffers encoded binary data. setting to true is equivalent to `{format: 'protobuf'}`
- `primary` (default `key`) - the column or array of columns to use as the primary key
- `hash` (default `false`) - if true `key` will be set to the md5 hex hash of the string of the primary key(s)
- `primaryFormat` - a function that formats the key before it gets inserted. accepts `(val)` and must return a string to set as the key
- `columns` - specify the column names to use when parsing multibuffer/csv. Mandatory for multibuffer, optional for csv (csv headers are automatically parsed but this can be used to override them)
- `headerRow` (default `true`) - set to false if your csv doesn't have a header row. you'll also have to manually specify `columns`
- `separator` (default `,`) - passed to the csv parser
- `delimiter` (default `\n`) - passed to the csv parser
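A sketch of bulk loading a CSV file through the write stream (wrapped in a function so it stands alone; the file name and the `name` primary-key column are illustrative):

```javascript
// Sketch: bulk load a CSV file, using the 'name' column as the primary key.
function writeExample(db) {
  var fs = require('fs')

  var ws = db.createWriteStream({csv: true, primary: 'name'})

  ws.on('data', function (status) {
    console.log(status) // success/fail status object for each row written
  })

  fs.createReadStream('data.csv').pipe(ws)
}
```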
### var versions = db.createVersionStream(key, [opts])

Returns a read stream that emits all versions of a given `key`.

#### options

- `start` (default 0) - version to start at
- `end` (default infinity) - version to stop at
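A sketch of listing the history of one key (wrapped in a function so it stands alone; the key is illustrative):

```javascript
// Sketch: print every stored version of a single key.
function versionsExample(db) {
  db.createVersionStream('bob').on('data', function (row) {
    console.log(row.version, row)
  })
}
```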
### var blobWriter = db.createBlobWriteStream(filename, [row], [cb])

Returns a writable stream that you can stream a binary blob attachment into. Calls `cb` with `(err, updated)` where `updated` is the new version of the row that the blob was attached to.

`filename` may be either simply a string for the filename you want to save the blob as, or an options object, e.g. `{filename: 'foo.txt'}`. `filename` will get passed to the underlying blob store backend as the `options` argument.

If specified, `row` should be a JS object you want to attach the blob to, obeying the same update/conflict rules as `db.put`. If not specified a new row will be created.
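A sketch of attaching a blob to an existing row (wrapped in a function so it stands alone; the file name, key and version are illustrative):

```javascript
// Sketch: stream a file into dat as a blob attached to the row 'bob'.
function blobWriteExample(db) {
  var fs = require('fs')

  var blobWriter = db.createBlobWriteStream('photo.png', {key: 'bob', version: 2}, function (err, updated) {
    if (err) return console.error(err)
    console.log(updated.version) // new version of the row with the blob attached
  })

  fs.createReadStream('photo.png').pipe(blobWriter)
}
```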
### var blobReader = db.createBlobReadStream(key, filename, [options])

Returns a readable stream of blob data.

`key` is the key of the row where the blob is stored. `filename` is the name of the attached blob. Both are required.

#### options

- `version` (default latest) - the version of the row to get
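A sketch of reading a blob back out (wrapped in a function so it stands alone; the key, file names and version are illustrative):

```javascript
// Sketch: read an attached blob and write it to a local file.
function blobReadExample(db) {
  var fs = require('fs')

  db.createBlobReadStream('bob', 'photo.png', {version: 3})
    .pipe(fs.createWriteStream('photo-copy.png'))
}
```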
### dat.listen([port], [cb])

Starts the dat HTTP server. `port` defaults to `6461` or the next largest available open port; `cb` gets called with `(err)` when the server has started/failed.
### dat.clone(remote, [cb])

Initializes a new dat (if not already initialized) and makes a local replica of `remote` in the folder where dat was instantiated. May be faster than `dat.pull` if the remote server has faster clone capabilities (e.g. hyperleveldb's `liveBackup`).
### dat.push(remote, [cb])

Synchronizes the local dat with a remote dat by pushing all changes to the remote dat over HTTP. Calls `cb` with `(err)` when done.

`remote` should be the base HTTP address of the remote dat, e.g. `http://localhost:6461`.
### dat.pull(remote, [cb])

Synchronizes the local dat with a remote dat by pulling all changes from the remote dat over HTTP. Calls `cb` with `(err)` when done.

`remote` should be the base HTTP address of the remote dat, e.g. `http://localhost:6461`.

#### options

- `live` (default `false`) - if true will keep the pull open forever and will receive new changes as they happen from the remote
- `quiet` (default `false`) - if true will suppress progress messages
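A sketch of replicating in both directions (wrapped in a function so it stands alone; the address is the default port from above, and passing an options object to `dat.pull` between `remote` and the callback is an assumption about the argument order):

```javascript
// Sketch: push local changes to a remote dat, then pull from it,
// staying open for live updates.
function syncExample(dat) {
  var remote = 'http://localhost:6461'

  dat.push(remote, function (err) {
    if (err) return console.error(err)

    // assumed signature: dat.pull(remote, opts, cb)
    dat.pull(remote, {live: true}, function (err) {
      if (err) console.error(err)
    })
  })
}
```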
### dat.init(path, [cb])

Creates a new empty dat folder and database at `path/.dat`. This method is called by default when you create a dat instance.
### var paths = dat.paths(path)

Returns an object with the various absolute paths (calculated using `path` as the base dir) for the different parts of dat, e.g. the `.dat` folder, the leveldb folder and the blob store.
### dat.exists(path, cb)

Checks if `path` has a dat store in it. Calls `cb` with `(err, exists)` where `exists` is a boolean.
### dat.close(cb)

Closes the HTTP server, the RPC client (if present) and the database, and cleans up the `.dat/PORT` file. Calls `cb` with `(err)` when done.
### dat.destroy(path, cb)

Calls `.close` and then destroys the `.dat` folder in `path`. Calls `cb` with `(err)` when done.
### dat.cat()

Prints a line separated JSON serialized version of `dat.createReadStream()` (the newest version of all rows) to stdout.
### dat.dump()

Prints the raw encoded key/value data from leveldb to stdout as line separated JSON. Used for debugging.
### var number = dat.getRowCount()

Returns the current number of rows in the db.

### var headers = dat.headers()

Returns an array of all the current column names in the db. Used for generating CSVs.
### dat.config(path, cb)

Parses the contents of `.dat/dat.json` for the dat at `path` and calls `cb` with `(err, config)`.