doc: more formatting fixes #7727

Merged (5 commits) on Aug 4, 2016
50 changes: 27 additions & 23 deletions BUILDING.md
@@ -28,7 +28,7 @@ On FreeBSD and OpenBSD, you may also need:
* libexecinfo (FreeBSD and OpenBSD only)


```text
```console
$ ./configure
$ make
$ [sudo] make install
@@ -37,7 +37,7 @@ $ [sudo] make install
If your Python binary is in a non-standard location or has a
non-standard name, run the following instead:

```text
```console
$ export PYTHON=/path/to/python
$ $PYTHON ./configure
$ make
@@ -46,13 +46,13 @@ $ [sudo] make install

To run the tests:

```text
```console
$ make test
```

To run the native module tests:

```text
```console
$ make test-addons
```

@@ -61,35 +61,35 @@ To run the npm test suite:
*note: to run the suite on node v4 or earlier you must first*
*run `make install`*

```
```console
$ make test-npm
```

To build the documentation:

This will build Node.js first (if necessary) and then use it to build the docs:

```text
```console
$ make doc
```

If you have an existing Node.js you can build just the docs with:

```text
```console
$ NODE=node make doc-only
```

(Where `node` is the path to your executable.)
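
For instance, if the binary you want to use lives outside your `PATH`, the invocation might look like the following sketch (the path shown is purely illustrative):

```console
$ NODE=/usr/local/bin/node make doc-only
```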

To read the documentation:

```text
```console
$ man doc/node.1
```

To test if Node.js was built correctly:

```
```console
$ node -e "console.log('Hello from Node.js ' + process.version)"
```

@@ -107,19 +107,19 @@ Prerequisites:
[Git for Windows](http://git-scm.com/download/win) includes Git Bash
and tools which can be included in the global `PATH`.

```text
```console
> vcbuild nosign
```

To run the tests:

```text
```console
> vcbuild test
```

To test if Node.js was built correctly:

```text
```console
> Release\node -e "console.log('Hello from Node.js', process.version)"
```

@@ -136,7 +136,7 @@ Be sure you have downloaded and extracted [Android NDK]
(https://developer.android.com/tools/sdk/ndk/index.html)
before in a folder. Then run:

```
```console
$ ./android-configure /path/to/your/android-ndk
$ make
```
@@ -165,13 +165,13 @@ Node.js source does not include all locales.)

##### Unix / OS X:

```text
```console
$ ./configure --with-intl=full-icu --download=all
```

##### Windows:

```text
```console
> vcbuild full-icu download-all
```

@@ -182,19 +182,19 @@ The `Intl` object will not be available, nor some other APIs such as

##### Unix / OS X:

```text
```console
$ ./configure --without-intl
```

##### Windows:

```text
```console
> vcbuild without-intl
```

#### Use existing installed ICU (Unix / OS X only):

```text
```console
$ pkg-config --modversion icu-i18n && ./configure --with-intl=system-icu
```

@@ -210,14 +210,18 @@ Download the file named something like `icu4c-**##.#**-src.tgz` (or

##### Unix / OS X

```text
# from an already-unpacked ICU:
From an already-unpacked ICU:
```console
$ ./configure --with-intl=[small-icu,full-icu] --with-icu-source=/path/to/icu
```

# from a local ICU tarball
From a local ICU tarball:
```console
$ ./configure --with-intl=[small-icu,full-icu] --with-icu-source=/path/to/icu.tgz
```

# from a tarball URL
From a tarball URL:
```console
$ ./configure --with-intl=full-icu --with-icu-source=http://url/to/icu.tgz
```
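
As a concrete illustration of the tarball form above (the ICU version number and download path here are assumptions, not values taken from this document):

```console
$ ./configure --with-intl=small-icu --with-icu-source=$HOME/Downloads/icu4c-57.1-src.tgz
```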

@@ -227,7 +231,7 @@ First unpack latest ICU to `deps/icu`
[icu4c-**##.#**-src.tgz](http://icu-project.org/download) (or `.zip`)
as `deps/icu` (You'll have: `deps/icu/source/...`)

```text
```console
> vcbuild full-icu
```

4 changes: 2 additions & 2 deletions CONTRIBUTING.md
@@ -100,7 +100,7 @@ changed and why. Follow these guidelines when writing one:

A good commit log can look something like this:

```
```txt
subsystem: explaining the commit in one line

Body of commit message is a few lines of text, explaining things
@@ -122,7 +122,7 @@ what subsystem (or subsystems) your changes touch.
If your patch fixes an open issue, you can add a reference to it at the end
of the log. Use the `Fixes:` prefix and the full issue URL. For example:

```
```txt
Fixes: https://github.com/nodejs/node/issues/1337
```

Expand Down
27 changes: 13 additions & 14 deletions benchmark/README.md
@@ -30,8 +30,6 @@ install.packages("ggplot2")
install.packages("plyr")
```

[wrk]: https://github.com/wg/wrk

## Running benchmarks

### Running individual benchmarks
@@ -43,7 +41,7 @@ conclusions about the performance.
Individual benchmarks can be executed by simply executing the benchmark script
with node.

```
```console
$ node benchmark/buffers/buffer-tostring.js

buffers/buffer-tostring.js n=10000000 len=0 arg=true: 62710590.393305704
@@ -65,7 +63,7 @@ measured in ops/sec (higher is better).**
Furthermore you can specify a subset of the configurations, by setting them in
the process arguments:

```
```console
$ node benchmark/buffers/buffer-tostring.js len=1024

buffers/buffer-tostring.js n=10000000 len=1024 arg=true: 3498295.68561504
@@ -78,7 +76,7 @@ Similar to running individual benchmarks, a group of benchmarks can be executed
by using the `run.js` tool. Again this does not provide the statistical
information to make any conclusions.

```
```console
$ node benchmark/run.js arrays

arrays/var-int.js
@@ -98,7 +96,7 @@ arrays/zero-int.js n=25 type=Buffer: 90.49906662339653
```

It is possible to execute more groups by adding extra process arguments.
```
```console
$ node benchmark/run.js arrays buffers
```

@@ -119,13 +117,13 @@ First build two versions of node, one from the master branch (here called

The `compare.js` tool will then produce a csv file with the benchmark results.

```
```console
$ node benchmark/compare.js --old ./node-master --new ./node-pr-5134 string_decoder > compare-pr-5134.csv
```
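
For context, the two binaries referenced above might be produced along these lines; the branch name, build parallelism, and copy destinations below are assumptions rather than steps taken from this document:

```console
$ git checkout master
$ ./configure && make -j4
$ cp out/Release/node ./node-master
$ git checkout my-pr-branch
$ ./configure && make -j4
$ cp out/Release/node ./node-pr-5134
```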

For analysing the benchmark results use the `compare.R` tool.

```
```console
$ cat compare-pr-5134.csv | Rscript benchmark/compare.R

improvement significant p.value
@@ -159,16 +157,14 @@ _For the statistically minded, the R script performs an [independent/unpaired
same for both versions. The significant field will show a star if the p-value
is less than `0.05`._

[t-test]: https://en.wikipedia.org/wiki/Student%27s_t-test#Equal_or_unequal_sample_sizes.2C_unequal_variances

The `compare.R` tool can also produce a box plot by using the `--plot filename`
option. In this case there are 48 different benchmark combinations, thus you
may want to filter the csv file. This can be done while benchmarking using the
`--set` parameter (e.g. `--set encoding=ascii`) or by filtering results
afterwards using tools such as `sed` or `grep`. In the `sed` case be sure to
keep the first line since that contains the header information.
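
Filtering at benchmark time with `--set` might look roughly like the sketch below, which mirrors the `compare.js` invocation shown earlier (the exact flag placement is an assumption); the `sed`-based filtering is shown right after:

```console
$ node benchmark/compare.js --old ./node-master --new ./node-pr-5134 --set encoding=ascii string_decoder > compare-pr-5134.csv
```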

```
```console
$ cat compare-pr-5134.csv | sed '1p;/encoding=ascii/!d' | Rscript benchmark/compare.R --plot compare-plot.png

improvement significant p.value
@@ -190,15 +186,15 @@ example to analyze the time complexity.
To do this use the `scatter.js` tool, this will run a benchmark multiple times
and generate a csv with the results.

```
```console
$ node benchmark/scatter.js benchmark/string_decoder/string-decoder.js > scatter.csv
```

After generating the csv, a comparison table can be created using the
`scatter.R` tool. Even more useful it creates an actual scatter plot when using
the `--plot filename` option.

```
```console
$ cat scatter.csv | Rscript benchmark/scatter.R --xaxis chunk --category encoding --plot scatter-plot.png --log

aggregating variable: inlen
@@ -229,7 +225,7 @@ can be solved by filtering. This can be done while benchmarking using the
afterwards using tools such as `sed` or `grep`. In the `sed` case be
sure to keep the first line since that contains the header information.

```
```console
$ cat scatter.csv | sed -E '1p;/([^,]+, ){3}128,/!d' | Rscript benchmark/scatter.R --xaxis chunk --category encoding --plot scatter-plot.png --log

chunk encoding mean confidence.interval
@@ -290,3 +286,6 @@ function main(conf) {
bench.end(conf.n);
}
```

[wrk]: https://github.com/wg/wrk
[t-test]: https://en.wikipedia.org/wiki/Student%27s_t-test#Equal_or_unequal_sample_sizes.2C_unequal_variances