From 89e72ad75a5593eff1e8e24e1896ee41006d3e02 Mon Sep 17 00:00:00 2001
From: Morten Piibeleht
Date: Thu, 23 Aug 2018 09:38:42 +1200
Subject: [PATCH] Update canonical URLs in v1.0.0/ docs on gh-pages (#28821)

* Update canonical URLs in v1.0.0/ docs

find v1.0.0/ -name '*html' -exec sed -i 's/

+

Arrays

Arrays

Constructors and Types

AbstractArray{T,N}

Supertype for N-dimensional arrays (or array-like types) with elements of type T. Array and other types are subtypes of this. See the manual section on the AbstractArray interface.

source
AbstractVector{T}

Supertype for one-dimensional arrays (or array-like types) with elements of type T. Alias for AbstractArray{T,1}.

source
AbstractMatrix{T}

Supertype for two-dimensional arrays (or array-like types) with elements of type T. Alias for AbstractArray{T,2}.

source
Base.AbstractVecOrMat (Constant)
AbstractVecOrMat{T}

Union type of AbstractVector{T} and AbstractMatrix{T}.

source
Core.Array (Type)
Array{T,N} <: AbstractArray{T,N}

N-dimensional dense array with elements of type T.

source
Core.Array (Method)
Array{T}(undef, dims)
 Array{T,N}(undef, dims)

Construct an uninitialized N-dimensional Array containing elements of type T. N can either be supplied explicitly, as in Array{T,N}(undef, dims), or be determined by the length or number of dims. dims may be a tuple or a series of integer arguments corresponding to the lengths in each dimension. If the rank N is supplied explicitly, then it must match the length or number of dims. See undef.

Examples

julia> A = Array{Float64,2}(undef, 2, 3) # N given explicitly
 2×3 Array{Float64,2}:
  6.90198e-310  6.90198e-310  6.90198e-310
diff --git a/en/stable/base/base/index.html b/en/stable/base/base/index.html
index 11cc38d5b0412..4793544b3ef8c 100644
--- a/en/stable/base/base/index.html
+++ b/en/stable/base/base/index.html
@@ -6,7 +6,7 @@
 
 ga('create', 'UA-28835595-6', 'auto');
 ga('send', 'pageview');
-

+

Essentials

Essentials

Introduction

Julia Base contains a range of functions and macros appropriate for performing scientific and numerical computing, but is also as broad as those of many general purpose programming languages. Additional functionality is available from a growing collection of available packages. Functions are grouped by topic below.

Some general notes:

  • To use module functions, use import Module to import the module, and Module.fn(x) to use the functions.
  • Alternatively, using Module will import all exported Module functions into the current namespace.
  • By convention, function names ending with an exclamation point (!) modify their arguments. Some functions have both modifying (e.g., sort!) and non-modifying (sort) versions.

Getting Around

Base.exit (Function)
exit(code=0)

Stop the program with an exit code. The default exit code is zero, indicating that the program completed successfully. In an interactive session, exit() can be called with the keyboard shortcut ^D.

source
Base.atexit (Function)
atexit(f)

Register a zero-argument function f() to be called at process exit. atexit() hooks are called in last in first out (LIFO) order and run before object finalizers.
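
A minimal sketch of registering an exit hook (the message and its exact timing at shutdown are illustrative):

julia> atexit(() -> println("Julia is exiting"))  # runs just before the process terminates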

source
Base.isinteractive (Function)
isinteractive() -> Bool

Determine whether Julia is running an interactive session.

source
Base.summarysize (Function)
Base.summarysize(obj; exclude=Union{...}, chargeall=Union{...}) -> Int

Compute the amount of memory used by all unique objects reachable from the argument.

Keyword Arguments

  • exclude: specifies the types of objects to exclude from the traversal.
  • chargeall: specifies the types of objects to always charge the size of all of their fields, even if those fields would normally be excluded.
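
For illustration only (the reported byte count depends on the platform and Julia version, so no result is shown):

julia> Base.summarysize(rand(10, 10))  # bytes reachable from the 10x10 array, including its header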
source
Base.require (Function)
require(module::Symbol)

This function is part of the implementation of using / import, if a module is not already defined in Main. It can also be called directly to force reloading a module, regardless of whether it has been loaded before (for example, when interactively developing libraries).

Loads a source file, in the context of the Main module, on every active node, searching standard locations for files. require is considered a top-level operation, so it sets the current include path but does not use it to search for files (see help for include). This function is typically used to load library code, and is implicitly called by using to load packages.

When searching for files, require first looks for package code in the global array LOAD_PATH. require is case-sensitive on all platforms, including those with case-insensitive filesystems like macOS and Windows.

For more details regarding code loading, see the manual.

source
Base.compilecache (Function)
Base.compilecache(module::PkgId)

Creates a precompiled cache file for a module and all of its dependencies. This can be used to reduce package load times. Cache files are stored in DEPOT_PATH[1]/compiled. See Module initialization and precompilation for important notes.

source
Base.__precompile__ (Function)
__precompile__(isprecompilable::Bool)

Specify whether the file calling this function is precompilable, defaulting to true. If a module or file is not safely precompilable, it should call __precompile__(false) in order to throw an error if Julia attempts to precompile it.

source
Base.include (Function)
Base.include([m::Module,] path::AbstractString)

Evaluate the contents of the input source file in the global scope of module m. Every module (except those defined with baremodule) has its own 1-argument definition of include, which evaluates the file in that module. Returns the result of the last evaluated expression of the input file. During including, a task-local include path is set to the directory containing the file. Nested calls to include will search relative to that path. This function is typically used to load source interactively, or to combine files in packages that are broken into multiple source files.

source
include(path::AbstractString)

Evaluate the contents of the input source file in the global scope of the containing module. Every module (except those defined with baremodule) has its own 1-argument definition of include, which evaluates the file in that module. Returns the result of the last evaluated expression of the input file. During including, a task-local include path is set to the directory containing the file. Nested calls to include will search relative to that path. This function is typically used to load source interactively, or to combine files in packages that are broken into multiple source files.

Use Base.include to evaluate a file into another module.

source
Base.include_string (Function)
include_string(m::Module, code::AbstractString, filename::AbstractString="string")

Like include, except reads code from the given string rather than from a file.

source
include_dependency(path::AbstractString)

In a module, declare that the file specified by path (relative or absolute) is a dependency for precompilation; that is, the module will need to be recompiled if this file changes.

This is only needed if your module depends on a file that is not used via include. It has no effect outside of compilation.

source
Base.which (Method)
which(f, types)

Returns the method of f (a Method object) that would be called for arguments of the given types.

If types is an abstract type, then the method that would be called by invoke is returned.
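
A small illustrative call (the returned Method object prints differently across versions, so no output is shown):

julia> which(+, (Int, Int))  # the method of + that two Int arguments would dispatch to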

source
Base.methods (Function)
methods(f, [types])

Returns the method table for f.

If types is specified, returns an array of methods whose types match.

source
Base.@show (Macro)
@show

Show an expression and result, returning the result.

source
ans (Keyword)
ans

A variable referring to the last computed value, automatically set at the interactive prompt.

source

Keywords

module (Keyword)
module

module declares a Module, which is a separate global variable workspace. Within a module, you can control which names from other modules are visible (via importing), and specify which of your names are intended to be public (via exporting). Modules allow you to create top-level definitions without worrying about name conflicts when your code is used together with somebody else’s. See the manual section about modules for more details.

Examples

module Foo
 import Base.show
 export MyType, foo
 
diff --git a/en/stable/base/c/index.html b/en/stable/base/c/index.html
index 4045d60e5c8f3..3e45f5092021e 100644
--- a/en/stable/base/c/index.html
+++ b/en/stable/base/c/index.html
@@ -6,7 +6,7 @@
 
 ga('create', 'UA-28835595-6', 'auto');
 ga('send', 'pageview');
-

+

C Interface

C Interface

ccall (Keyword)
ccall((function_name, library), returntype, (argtype1, ...), argvalue1, ...)
 ccall(function_pointer, returntype, (argtype1, ...), argvalue1, ...)

Call a function in a C-exported shared library, specified by the tuple (function_name, library), where each component is either a string or symbol. Alternatively, ccall may also be used to call a function pointer function_pointer, such as one returned by dlsym.

Note that the argument type tuple must be a literal tuple, and not a tuple-valued variable or expression.

Each argvalue to the ccall will be converted to the corresponding argtype, by automatic insertion of calls to unsafe_convert(argtype, cconvert(argtype, argvalue)). (See also the documentation for unsafe_convert and cconvert for further details.) In most cases, this simply results in a call to convert(argtype, argvalue).
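
A minimal sketch, assuming a Linux-style C runtime (the library name "libc.so.6" is platform-dependent and is an assumption here):

julia> t = ccall((:clock, "libc.so.6"), Int32, ())  # clock ticks since process start, as reported by C's clock()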

source
cglobal((symbol, library) [, type=Cvoid])

Obtain a pointer to a global variable in a C-exported shared library, specified exactly as in ccall. Returns a Ptr{Type}, defaulting to Ptr{Cvoid} if no Type argument is supplied. The values can be read or written by unsafe_load or unsafe_store!, respectively.

source
Base.@cfunction (Macro)
@cfunction(callable, ReturnType, (ArgumentTypes...,)) -> Ptr{Cvoid}
 @cfunction($callable, ReturnType, (ArgumentTypes...,)) -> CFunction

Generate a C-callable function pointer from the Julia function closure for the given type signature. To pass the return value to a ccall, use the argument type Ptr{Cvoid} in the signature.

Note that the argument type tuple must be a literal tuple, and not a tuple-valued variable or expression (although it can include a splat expression). And that these arguments will be evaluated in global scope during compile-time (not deferred until runtime). Adding a '$' in front of the function argument changes this to instead create a runtime closure over the local variable callable.

See manual section on ccall and cfunction usage.

Examples

julia> function foo(x::Int, y::Int)
            return x + y
diff --git a/en/stable/base/collections/index.html b/en/stable/base/collections/index.html
index 4782865629675..8db41db828cd5 100644
--- a/en/stable/base/collections/index.html
+++ b/en/stable/base/collections/index.html
@@ -6,7 +6,7 @@
 
 ga('create', 'UA-28835595-6', 'auto');
 ga('send', 'pageview');
-

+

Collections and Data Structures

Collections and Data Structures

Iteration

Sequential iteration is implemented by the iterate function. The general for loop:

for i in iter   # or  "for i = iter"
     # body
 end

is translated into:

next = iterate(iter)
 while next !== nothing
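
For reference, the full lowering continues in this pattern (a sketch assembled from the text above; variable names are illustrative):

next = iterate(iter)
while next !== nothing
    (i, state) = next
    # body
    next = iterate(iter, state)
end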
diff --git a/en/stable/base/constants/index.html b/en/stable/base/constants/index.html
index 4d5a776e1f8b4..e67df560b5305 100644
--- a/en/stable/base/constants/index.html
+++ b/en/stable/base/constants/index.html
@@ -6,4 +6,4 @@
 
 ga('create', 'UA-28835595-6', 'auto');
 ga('send', 'pageview');
-


+

Constants

Constants

Core.nothing (Constant)
nothing

The singleton instance of type Nothing, used by convention when there is no value to return (as in a C void function) or when a variable or field holds no value.

source
Base.PROGRAM_FILE (Constant)
PROGRAM_FILE

A string containing the script name passed to Julia from the command line. Note that the script name remains unchanged from within included files. Alternatively see @__FILE__.

source
Base.ARGS (Constant)
ARGS

An array of the command line arguments passed to Julia, as strings.

source
Base.C_NULL (Constant)
C_NULL

The C null pointer constant, sometimes used when calling external code.

source
Base.VERSION (Constant)
VERSION

A VersionNumber object describing which version of Julia is in use. For details see Version Number Literals.

source
Base.LOAD_PATH (Constant)
LOAD_PATH

An array of paths for using and import statements to consider as project environments or package directories when loading code. See Code Loading.

source
Base.Sys.BINDIR (Constant)
Sys.BINDIR

A string containing the full path to the directory containing the julia executable.

source
Base.Sys.CPU_THREADS (Constant)
Sys.CPU_THREADS

The number of logical CPU cores available in the system, i.e. the number of threads that the CPU can run concurrently. Note that this is not necessarily the number of CPU cores, for example, in the presence of hyper-threading.

See Hwloc.jl or CpuId.jl for extended information, including number of physical cores.

source
Base.Sys.WORD_SIZE (Constant)
Sys.WORD_SIZE

Standard word size on the current machine, in bits.

source
Base.Sys.KERNEL (Constant)
Sys.KERNEL

A symbol representing the name of the operating system, as returned by uname of the build configuration.

source
Base.Sys.ARCH (Constant)
Sys.ARCH

A symbol representing the architecture of the build configuration.

source
Base.Sys.MACHINE (Constant)
Sys.MACHINE

A string containing the build triple.

source

See also:

diff --git a/en/stable/base/file/index.html b/en/stable/base/file/index.html
index f6c8a8fb9db28..609577bb8be41 100644
--- a/en/stable/base/file/index.html
+++ b/en/stable/base/file/index.html
@@ -6,7 +6,7 @@
 
 ga('create', 'UA-28835595-6', 'auto');
 ga('send', 'pageview');
-

+

Filesystem

Filesystem

Base.Filesystem.pwd (Function)
pwd() -> AbstractString

Get the current working directory.

Examples

julia> pwd()
 "/home/JuliaUser"
 
 julia> cd("/home/JuliaUser/Projects/julia")
diff --git a/en/stable/base/io-network/index.html b/en/stable/base/io-network/index.html
index 240416c35cc25..22724c41beab9 100644
--- a/en/stable/base/io-network/index.html
+++ b/en/stable/base/io-network/index.html
@@ -6,7 +6,7 @@
 
 ga('create', 'UA-28835595-6', 'auto');
 ga('send', 'pageview');
-

+

I/O and Network

I/O and Network

General I/O

Base.stdout (Constant)
stdout

Global variable referring to the standard out stream.

source
Base.stderr (Constant)
stderr

Global variable referring to the standard error stream.

source
Base.stdin (Constant)
stdin

Global variable referring to the standard input stream.

source
Base.open (Function)
open(filename::AbstractString; keywords...) -> IOStream

Open a file in a mode specified by five boolean keyword arguments:

Keyword     Description               Default
read        open for reading          !write
write       open for writing          truncate | append
create      create if non-existent    !read & write | truncate | append
truncate    truncate to zero size     !read & write
append      seek to end               false

The default when no keywords are passed is to open files for reading only. Returns a stream for accessing the opened file.
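
A hedged sketch of the keyword form ("notes.txt" is a placeholder name); per the defaults above, truncate becomes true here, so this behaves like mode "w":

julia> io = open("notes.txt", write = true, create = true);

julia> close(io)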

source
open(filename::AbstractString, [mode::AbstractString]) -> IOStream

Alternate syntax for open, where a string-based mode specifier is used instead of the five booleans. The values of mode correspond to those from fopen(3) or Perl open, and are equivalent to setting the following boolean groups:

Mode    Description                      Keywords
r       read                             none
w       write, create, truncate          write = true
a       write, create, append            append = true
r+      read, write                      read = true, write = true
w+      read, write, create, truncate    truncate = true, read = true
a+      read, write, create, append      append = true, read = true

Examples

julia> io = open("myfile.txt", "w");
 
 julia> write(io, "Hello world!");
 
diff --git a/en/stable/base/iterators/index.html b/en/stable/base/iterators/index.html
index 9564e441c57b0..68fe0a3d31b9d 100644
--- a/en/stable/base/iterators/index.html
+++ b/en/stable/base/iterators/index.html
@@ -6,7 +6,7 @@
 
 ga('create', 'UA-28835595-6', 'auto');
 ga('send', 'pageview');
-

+

Iteration utilities

Iteration utilities

Stateful(itr)

There are several different ways to think about this iterator wrapper:

  1. It provides a mutable wrapper around an iterator and its iteration state.
  2. It turns an iterator-like abstraction into a Channel-like abstraction.
  3. It's an iterator that mutates to become its own rest iterator whenever an item is produced.

Stateful provides the regular iterator interface. Like other mutable iterators (e.g. Channel), if iteration is stopped early (e.g. by a break in a for loop), iteration can be resumed from the same spot by continuing to iterate over the same iterator object (in contrast, an immutable iterator would restart from the beginning).

Examples

julia> a = Iterators.Stateful("abcdef");
 
 julia> isempty(a)
 false
diff --git a/en/stable/base/libc/index.html b/en/stable/base/libc/index.html
index 737e2e7a6238c..8e7865bfef978 100644
--- a/en/stable/base/libc/index.html
+++ b/en/stable/base/libc/index.html
@@ -6,4 +6,4 @@
 
 ga('create', 'UA-28835595-6', 'auto');
 ga('send', 'pageview');
-

+

C Standard Library

C Standard Library

Base.Libc.malloc (Function)
malloc(size::Integer) -> Ptr{Cvoid}

Call malloc from the C standard library.

source
Base.Libc.calloc (Function)
calloc(num::Integer, size::Integer) -> Ptr{Cvoid}

Call calloc from the C standard library.

source
Base.Libc.realloc (Function)
realloc(addr::Ptr, size::Integer) -> Ptr{Cvoid}

Call realloc from the C standard library.

See warning in the documentation for free regarding only using this on memory originally obtained from malloc.

source
Base.Libc.free (Function)
free(addr::Ptr)

Call free from the C standard library. Only use this on memory obtained from malloc, not on pointers retrieved from other C libraries. Ptr objects obtained from C libraries should be freed by the free functions defined in that library, to avoid assertion failures if multiple libc libraries exist on the system.

source
Base.Libc.errno (Function)
errno([code])

Get the value of the C library's errno. If an argument is specified, it is used to set the value of errno.

The value of errno is only valid immediately after a ccall to a C library routine that sets it. Specifically, you cannot call errno at the next prompt in a REPL, because lots of code is executed between prompts.

source
Base.Libc.strerror (Function)
strerror(n=errno())

Convert a system call error code to a descriptive string.

source
GetLastError()

Call the Win32 GetLastError function [only available on Windows].

source
FormatMessage(n=GetLastError())

Convert a Win32 system call error code to a descriptive string [only available on Windows].

source
Base.Libc.time (Method)
time(t::TmStruct)

Converts a TmStruct struct to a number of seconds since the epoch.

source
Base.Libc.strftime (Function)
strftime([format], time)

Convert time, given as a number of seconds since the epoch or a TmStruct, to a formatted string using the given format. Supported formats are the same as those in the standard C library.
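
An illustrative call (the exact output depends on the current time, time zone, and locale):

julia> Libc.strftime("%Y-%m-%d %H:%M", time())  # e.g. "2018-08-23 09:38"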

source
Base.Libc.strptime (Function)
strptime([format], timestr)

Parse a formatted time string into a TmStruct giving the seconds, minute, hour, date, etc. Supported formats are the same as those in the standard C library. On some platforms, timezones will not be parsed correctly. If the result of this function will be passed to time to convert it to seconds since the epoch, the isdst field should be filled in manually. Setting it to -1 will tell the C library to use the current system settings to determine the timezone.

source
TmStruct([seconds])

Convert a number of seconds since the epoch to broken-down format, with fields sec, min, hour, mday, month, year, wday, yday, and isdst.

source
flush_cstdio()

Flushes the C stdout and stderr streams (which may have been written to by external C code).

source
diff --git a/en/stable/base/math/index.html b/en/stable/base/math/index.html
index 540930a516075..58b7ec3b63688 100644
--- a/en/stable/base/math/index.html
+++ b/en/stable/base/math/index.html
@@ -6,7 +6,7 @@
 
 ga('create', 'UA-28835595-6', 'auto');
 ga('send', 'pageview');
-

+

Mathematics

Mathematics

Mathematical Operators

Base.:- (Method)
-(x)

Unary minus operator.

Examples

julia> -1
 -1
 
 julia> -(2)
diff --git a/en/stable/base/multi-threading/index.html b/en/stable/base/multi-threading/index.html
index 213c1d553fdf1..0cfef5e281aed 100644
--- a/en/stable/base/multi-threading/index.html
+++ b/en/stable/base/multi-threading/index.html
@@ -6,7 +6,7 @@
 
 ga('create', 'UA-28835595-6', 'auto');
 ga('send', 'pageview');
-

+

Multi-Threading

Multi-Threading

This experimental interface supports Julia's multi-threading capabilities. Types and functions described here might (and likely will) change in the future.

Base.Threads.threadid (Function)
Threads.threadid()

Get the ID number of the current thread of execution. The master thread has ID 1.

source
Base.Threads.nthreads (Function)
Threads.nthreads()

Get the number of threads available to the Julia process. This is the inclusive upper bound on threadid().

source
Threads.@threads

A macro to parallelize a for-loop to run with multiple threads. This spawns nthreads() number of threads, splits the iteration space amongst them, and iterates in parallel. A barrier is placed at the end of the loop which waits for all the threads to finish execution, and the loop returns.
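
A typical usage sketch (the thread IDs observed depend on how many threads the session was started with, e.g. via JULIA_NUM_THREADS):

julia> a = zeros(Int, 10);

julia> Threads.@threads for i = 1:10
           a[i] = Threads.threadid()
       end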

source
Threads.Atomic{T}

Holds a reference to an object of type T, ensuring that it is only accessed atomically, i.e. in a thread-safe manner.

Only certain "simple" types can be used atomically, namely the primitive boolean, integer, and floating-point types. These are Bool, Int8...Int128, UInt8...UInt128, and Float16...Float64.

New atomic objects can be created from a non-atomic value; if none is specified, the atomic object is initialized with zero.

Atomic objects can be accessed using the [] notation:

Examples

julia> x = Threads.Atomic{Int}(3)
 Base.Threads.Atomic{Int64}(3)
 
 julia> x[] = 1
diff --git a/en/stable/base/numbers/index.html b/en/stable/base/numbers/index.html
index 62c0d275d6b3d..a0151a7724a4d 100644
--- a/en/stable/base/numbers/index.html
+++ b/en/stable/base/numbers/index.html
@@ -6,7 +6,7 @@
 
 ga('create', 'UA-28835595-6', 'auto');
 ga('send', 'pageview');
-

+

Numbers

Numbers

Standard Numeric Types

Abstract number types

Core.Number (Type)
Number

Abstract supertype for all number types.

source
Core.Real (Type)
Real <: Number

Abstract supertype for all real numbers.

source
AbstractFloat <: Real

Abstract supertype for all floating point numbers.

source
Core.Integer (Type)
Integer <: Real

Abstract supertype for all integers.

source
Core.Signed (Type)
Signed <: Integer

Abstract supertype for all signed integers.

source
Core.Unsigned (Type)
Unsigned <: Integer

Abstract supertype for all unsigned integers.

source
AbstractIrrational <: Real

Number type representing an exact irrational value.

source

Concrete number types

Core.Float16 (Type)
Float16 <: AbstractFloat

16-bit floating point number type.

source
Core.Float32 (Type)
Float32 <: AbstractFloat

32-bit floating point number type.

source
Core.Float64 (Type)
Float64 <: AbstractFloat

64-bit floating point number type.

source
BigFloat <: AbstractFloat

Arbitrary precision floating point number type.

source
Core.Bool (Type)
Bool <: Integer

Boolean type.

source
Core.Int8 (Type)
Int8 <: Signed

8-bit signed integer type.

source
Core.UInt8 (Type)
UInt8 <: Unsigned

8-bit unsigned integer type.

source
Core.Int16 (Type)
Int16 <: Signed

16-bit signed integer type.

source
Core.UInt16 (Type)
UInt16 <: Unsigned

16-bit unsigned integer type.

source
Core.Int32 (Type)
Int32 <: Signed

32-bit signed integer type.

source
Core.UInt32 (Type)
UInt32 <: Unsigned

32-bit unsigned integer type.

source
Core.Int64 (Type)
Int64 <: Signed

64-bit signed integer type.

source
Core.UInt64 (Type)
UInt64 <: Unsigned

64-bit unsigned integer type.

source
Core.Int128 (Type)
Int128 <: Signed

128-bit signed integer type.

source
Core.UInt128 (Type)
UInt128 <: Unsigned

128-bit unsigned integer type.

source
Base.GMP.BigInt (Type)
BigInt <: Signed

Arbitrary precision integer type.

source
Base.Complex (Type)
Complex{T<:Real} <: Number

Complex number type with real and imaginary part of type T.

ComplexF16, ComplexF32 and ComplexF64 are aliases for Complex{Float16}, Complex{Float32} and Complex{Float64} respectively.
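
For example, the aliases are simply alternative names for the same parameterized types:

julia> ComplexF64 === Complex{Float64}
true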

source
Base.Rational (Type)
Rational{T<:Integer} <: Real

Rational number type, with numerator and denominator of type T.

source
Base.Irrational (Type)
Irrational{sym} <: AbstractIrrational

Number type representing an exact irrational value denoted by the symbol sym.

source

Data Formats

Base.digits (Function)
digits([T<:Integer], n::Integer; base::T = 10, pad::Integer = 1)

Return an array with element type T (default Int) of the digits of n in the given base, optionally padded with zeros to a specified size. More significant digits are at higher indices, such that n == sum([digits[k]*base^(k-1) for k=1:length(digits)]).

Examples

julia> digits(10, base = 10)
 2-element Array{Int64,1}:
  0
  1
diff --git a/en/stable/base/parallel/index.html b/en/stable/base/parallel/index.html
index 5fa7a88f2cafe..957a0174d04d9 100644
--- a/en/stable/base/parallel/index.html
+++ b/en/stable/base/parallel/index.html
@@ -6,7 +6,7 @@
 
 ga('create', 'UA-28835595-6', 'auto');
 ga('send', 'pageview');
-

+

Tasks

Tasks

Core.Task (Type)
Task(func)

Create a Task (i.e. coroutine) to execute the given function func (which must be callable with no arguments). The task exits when this function returns.

Examples

julia> a() = sum(i for i in 1:1000);
 
 julia> b = Task(a);

In this example, b is a runnable Task that hasn't started yet.

source
Base.current_task (Function)
current_task()

Get the currently running Task.

source
Base.istaskdone (Function)
istaskdone(t::Task) -> Bool

Determine whether a task has exited.

Examples

julia> a2() = sum(i for i in 1:1000);
 
diff --git a/en/stable/base/punctuation/index.html b/en/stable/base/punctuation/index.html
index 5c337ea9a00d0..0cbf28ed54251 100644
--- a/en/stable/base/punctuation/index.html
+++ b/en/stable/base/punctuation/index.html
@@ -6,4 +6,4 @@
 
 ga('create', 'UA-28835595-6', 'auto');
 ga('send', 'pageview');
-

+

Punctuation

Punctuation

Extended documentation for mathematical symbols & functions is here.

symbol    meaning
@m        invoke macro m; followed by space-separated expressions
!         prefix "not" (logical negation) operator
a!( )     at the end of a function name, ! is used as a convention to indicate that a function modifies its argument(s)
#         begin single line comment
#=        begin multi-line comment (these are nestable)
=#        end multi-line comment
$         string and expression interpolation
%         remainder operator
^         exponent operator
&         bitwise and
&&        short-circuiting boolean and
|         bitwise or
||        short-circuiting boolean or
⊻         bitwise xor operator
*         multiply, or matrix multiply
()        the empty tuple
~         bitwise not operator
\         backslash operator
'         complex transpose operator Aᴴ
a[]       array indexing (calling getindex or setindex!)
[,]       vector literal constructor (calling vect)
[;]       vertical concatenation (calling vcat or hvcat)
[   ]     with space-separated expressions, horizontal concatenation (calling hcat or hvcat)
T{ }      parametric type instantiation
;         statement separator
,         separate function arguments or tuple components
?         3-argument conditional operator (used like: conditional ? if_true : if_false)
""        delimit string literals
''        delimit character literals
` `       delimit external process (command) specifications
...       splice arguments into a function call or declare a varargs function
.         access named fields in objects/modules (calling getproperty or setproperty!), also prefixes elementwise function calls (calling broadcast)
a:b       range a, a+1, a+2, ..., b
a:s:b     range a, a+s, a+2s, ..., b
:         index an entire dimension (firstindex:lastindex), see Colon
::        type annotation or typeassert, depending on context
:( )      quoted expression
:a        symbol a
<:        subtype operator
>:        supertype operator (reverse of subtype operator)
===       egal comparison operator
diff --git a/en/stable/base/simd-types/index.html b/en/stable/base/simd-types/index.html
index 81eaafe1d0c6c..616a32741d633 100644
--- a/en/stable/base/simd-types/index.html
+++ b/en/stable/base/simd-types/index.html
@@ -6,7 +6,7 @@
 
 ga('create', 'UA-28835595-6', 'auto');
 ga('send', 'pageview');
-

+

SIMD Support

SIMD Support

Type VecElement{T} is intended for building libraries of SIMD operations. Practical use of it requires using llvmcall. The type is defined as:

struct VecElement{T}
     value::T
 end

It has a special compilation rule: a homogeneous tuple of VecElement{T} maps to an LLVM vector type when T is a primitive bits type and the tuple length is in the set {2-6,8-10,16}.

At -O3, the compiler might automatically vectorize operations on such tuples. For example, the following program, when compiled with julia -O3 generates two SIMD addition instructions (addps) on x86 systems:

const m128 = NTuple{4,VecElement{Float32}}
 
diff --git a/en/stable/base/sort/index.html b/en/stable/base/sort/index.html
index 7355158054cd1..acef90efa1d64 100644
--- a/en/stable/base/sort/index.html
+++ b/en/stable/base/sort/index.html
@@ -6,7 +6,7 @@
 
 ga('create', 'UA-28835595-6', 'auto');
 ga('send', 'pageview');
-

+

Sorting and Related Functions

Sorting and Related Functions

Julia has an extensive, flexible API for sorting and interacting with already-sorted arrays of values. By default, Julia picks reasonable algorithms and sorts in standard ascending order:

julia> sort([2,3,1])
 3-element Array{Int64,1}:
  1
  2
diff --git a/en/stable/base/stacktraces/index.html b/en/stable/base/stacktraces/index.html
index d2f8f5c5af1d7..1935dff1bd3a2 100644
--- a/en/stable/base/stacktraces/index.html
+++ b/en/stable/base/stacktraces/index.html
@@ -6,4 +6,4 @@
 
 ga('create', 'UA-28835595-6', 'auto');
 ga('send', 'pageview');
-

+

StackTraces

StackTraces

StackFrame

Stack information representing execution context, with the following fields:

  • func::Symbol

    The name of the function containing the execution context.

  • linfo::Union{Core.MethodInstance, CodeInfo, Nothing}

    The MethodInstance containing the execution context (if it could be found).

  • file::Symbol

    The path to the file containing the execution context.

  • line::Int

    The line number in the file containing the execution context.

  • from_c::Bool

    True if the code is from C.

  • inlined::Bool

    True if the code is from an inlined frame.

  • pointer::UInt64

    Representation of the pointer to the execution context as returned by backtrace.

source
StackTrace

An alias for Vector{StackFrame} provided for convenience; returned by calls to stacktrace.

source
stacktrace([trace::Vector{Ptr{Cvoid}},] [c_funcs::Bool=false]) -> StackTrace

Returns a stack trace in the form of a vector of StackFrames. (By default stacktrace doesn't return C functions, but this can be enabled.) When called without specifying a trace, stacktrace first calls backtrace.

source

The following methods and types in Base.StackTraces are not exported and need to be called e.g. as StackTraces.lookup(ptr).

lookup(pointer::Union{Ptr{Cvoid}, UInt}) -> Vector{StackFrame}

Given a pointer to an execution context (usually generated by a call to backtrace), looks up stack frame context information. Returns an array of frame information for all functions inlined at that point, innermost function first.

source
remove_frames!(stack::StackTrace, name::Symbol)

Takes a StackTrace (a vector of StackFrames) and a function name (a Symbol) and removes the StackFrame specified by the function name from the StackTrace (also removing all frames above the specified function). Primarily used to remove StackTraces functions from the StackTrace prior to returning it.

source
remove_frames!(stack::StackTrace, m::Module)

Returns the StackTrace with all StackFrames from the provided Module removed.

source
diff --git a/en/stable/base/strings/index.html b/en/stable/base/strings/index.html
index f33612635addc..deb6a37928b5b 100644
--- a/en/stable/base/strings/index.html
+++ b/en/stable/base/strings/index.html
@@ -6,7 +6,7 @@
 
 ga('create', 'UA-28835595-6', 'auto');
 ga('send', 'pageview');
-

+

Strings

Strings

The AbstractChar type is the supertype of all character implementations in Julia. A character represents a Unicode code point, and can be converted to an integer via the codepoint function in order to obtain the numerical value of the code point, or constructed from the same integer. These numerical values determine how characters are compared with < and ==, for example. New T <: AbstractChar types should define a codepoint(::T) method and a T(::UInt32) constructor, at minimum.

A given AbstractChar subtype may be capable of representing only a subset of Unicode, in which case conversion from an unsupported UInt32 value may throw an error. Conversely, the built-in Char type represents a superset of Unicode (in order to losslessly encode invalid byte streams), in which case conversion of a non-Unicode value to UInt32 throws an error. The isvalid function can be used to check which codepoints are representable in a given AbstractChar type.

Internally, an AbstractChar type may use a variety of encodings. Conversion via codepoint(char) will not reveal this encoding because it always returns the Unicode value of the character. print(io, c) of any c::AbstractChar produces an encoding determined by io (UTF-8 for all built-in IO types), via conversion to Char if necessary.

write(io, c), in contrast, may emit an encoding depending on typeof(c), and read(io, typeof(c)) should read the same encoding as write. New AbstractChar types must provide their own implementations of write and read.
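As an illustrative (hypothetical) example, an ASCII-only character type needs little more than the two definitions named above (plus write and read methods, omitted here):

struct ASCIIChar <: AbstractChar
    b::UInt8
end

# required constructor from a code point; reject anything outside ASCII
ASCIIChar(c::UInt32) = c < 0x80 ? ASCIIChar(UInt8(c)) : throw(ArgumentError("not an ASCII code point"))

# required codepoint method; always returns the Unicode value
Base.codepoint(c::ASCIIChar) = UInt32(c.b)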

source
Core.CharType.
Char(c::Union{Number,AbstractChar})

Char is a 32-bit AbstractChar type that is the default representation of characters in Julia. Char is the type used for character literals like 'x' and it is also the element type of String.

In order to losslessly represent arbitrary byte streams stored in a String, a Char value may store information that cannot be converted to a Unicode codepoint — converting such a Char to UInt32 will throw an error. The isvalid(c::Char) function can be used to query whether c represents a valid Unicode character.

source
Base.codepointFunction.
codepoint(c::AbstractChar)

Return the Unicode codepoint (an unsigned integer) corresponding to the character c (or throw an exception if c does not represent a valid character). For Char, this is a UInt32 value, but AbstractChar types that represent only a subset of Unicode may return a different-sized integer (e.g. UInt8).

source
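For the built-in Char type this is simply the character's Unicode value as a UInt32:

julia> codepoint('a')
0x00000061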
Base.lengthMethod.
length(s::AbstractString) -> Int
 length(s::AbstractString, i::Integer, j::Integer) -> Int

The number of characters in string s from indices i through j. This is computed as the number of code unit indices from i to j which are valid character indices. With only a single string argument, this computes the number of characters in the entire string. With i and j arguments it computes the number of indices between i and j inclusive that are valid indices in the string s. In addition to in-bounds values, i may take the out-of-bounds value ncodeunits(s) + 1 and j may take the out-of-bounds value 0.

See also: isvalid, ncodeunits, lastindex, thisind, nextind, prevind

Examples

julia> length("jμΛIα")
 5
source
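To illustrate the form with index arguments: in the string above, the 2-byte characters mean that not every code unit index is a character index, so counting the valid indices between 1 and 5 gives:

julia> length("jμΛIα", 1, 5)
3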
Base.sizeofMethod.
sizeof(str::AbstractString)

Size, in bytes, of the string str. Equal to the number of code units in str multiplied by the size, in bytes, of one code unit in str.

Examples

julia> sizeof("")
 0
diff --git a/en/stable/devdocs/ast/index.html b/en/stable/devdocs/ast/index.html
index 69f09e52c8d03..f2d551f728c26 100644
--- a/en/stable/devdocs/ast/index.html
+++ b/en/stable/devdocs/ast/index.html
@@ -6,7 +6,7 @@
 
 ga('create', 'UA-28835595-6', 'auto');
 ga('send', 'pageview');

Julia ASTs

Julia ASTs

Julia has two representations of code. First there is a surface syntax AST returned by the parser (e.g. the Meta.parse function), and manipulated by macros. It is a structured representation of code as it is written, constructed by julia-parser.scm from a character stream. Next there is a lowered form, or IR (intermediate representation), which is used by type inference and code generation. In the lowered form there are fewer types of nodes, all macros are expanded, and all control flow is converted to explicit branches and sequences of statements. The lowered form is constructed by julia-syntax.scm.

First we will focus on the lowered form, since it is more important to the compiler. It is also less obvious to the human, since it results from a significant rearrangement of the input syntax.

Lowered form

The following data types exist in lowered form:

  • Expr

    Has a node type indicated by the head field, and an args field which is a Vector{Any} of subexpressions. While almost every part of a surface AST is represented by an Expr, the IR uses only a limited number of Exprs, mostly for calls, conditional branches (gotoifnot), and returns.

  • Slot

    Identifies arguments and local variables by consecutive numbering. Slot is an abstract type with subtypes SlotNumber and TypedSlot. Both types have an integer-valued id field giving the slot index. Most slots have the same type at all uses, and so are represented with SlotNumber. The types of these slots are found in the slottypes field of their MethodInstance object. Slots that require per-use type annotations are represented with TypedSlot, which has a typ field.

  • CodeInfo

    Wraps the IR of a method. Its code field is an array of expressions to execute.

  • GotoNode

    Unconditional branch. The argument is the branch target, represented as an index in the code array to jump to.

  • QuoteNode

    Wraps an arbitrary value to reference as data. For example, the function f() = :a contains a QuoteNode whose value field is the symbol a, in order to return the symbol itself instead of evaluating it.

  • GlobalRef

    Refers to global variable name in module mod.

  • SSAValue

    Refers to a consecutively-numbered (starting at 1) static single assignment (SSA) variable inserted by the compiler. The number (id) of an SSAValue is the code array index of the expression whose value it represents.

  • NewvarNode

    Marks a point where a variable (slot) is created. This has the effect of resetting a variable to undefined.
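A quick way to see several of these node types is to lower a small expression yourself; for an expression with control flow, Meta.lower returns an Expr(:thunk, ...) whose first argument is the CodeInfo (the exact statements vary between Julia versions):

julia> thunk = Meta.lower(Main, :(x > 0 ? x : -x));

julia> ci = thunk.args[1];      # the CodeInfo

julia> typeof.(ci.code)         # element types such as Expr, GotoNode, ...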

Expr types

These symbols appear in the head field of Exprs in lowered form.

  • call

    Function call (dynamic dispatch). args[1] is the function to call, args[2:end] are the arguments.

  • invoke

    Function call (static dispatch). args[1] is the MethodInstance to call, args[2:end] are the arguments (including the function that is being called, at args[2]).

  • static_parameter

    Reference a static parameter by index.

  • gotoifnot

    Conditional branch. If args[1] is false, goes to the index identified in args[2].

  • =

    Assignment. In the IR, the first argument is always a Slot or a GlobalRef.

  • method

    Adds a method to a generic function and assigns the result if necessary.

    Has a 1-argument form and a 4-argument form. The 1-argument form arises from the syntax function foo end. In the 1-argument form, the argument is a symbol. If this symbol already names a function in the current scope, nothing happens. If the symbol is undefined, a new function is created and assigned to the identifier specified by the symbol. If the symbol is defined but names a non-function, an error is raised. The definition of "names a function" is that the binding is constant, and refers to an object of singleton type. The rationale for this is that an instance of a singleton type uniquely identifies the type to add the method to. When the type has fields, it wouldn't be clear whether the method was being added to the instance or its type.

    The 4-argument form has the following arguments:

    • args[1]

      A function name, or false if unknown. If a symbol, then the expression first behaves like the 1-argument form above. This argument is ignored from then on. When this is false, it means a method is being added strictly by type, (::T)(x) = x.

    • args[2]

      A SimpleVector of argument type data. args[2][1] is a SimpleVector of the argument types, and args[2][2] is a SimpleVector of type variables corresponding to the method's static parameters.

    • args[3]

      A CodeInfo of the method itself. For "out of scope" method definitions (adding a method to a function that also has methods defined in different scopes) this is an expression that evaluates to a :lambda expression.

    • args[4]

      true or false, identifying whether the method is staged (@generated function).

  • const

    Declares a (global) variable as constant.

  • null

    Has no arguments; simply yields the value nothing.

  • new

    Allocates a new struct-like object. First argument is the type. The new pseudo-function is lowered to this, and the type is always inserted by the compiler. This is very much an internal-only feature, and does no checking. Evaluating arbitrary new expressions can easily segfault.

  • return

    Returns its argument as the value of the enclosing function.

  • the_exception

    Yields the caught exception inside a catch block. This is the value of the run time system variable jl_exception_in_transit.

  • enter

    Enters an exception handler (setjmp). args[1] is the label of the catch block to jump to on error.

  • leave

    Pop exception handlers. args[1] is the number of handlers to pop.

  • inbounds

    Controls turning bounds checks on or off. A stack is maintained; if the first argument of this expression is true or false (true means bounds checks are disabled), it is pushed onto the stack. If the first argument is :pop, the stack is popped.

  • boundscheck

    Has the value false if inlined into a section of code marked with @inbounds, otherwise has the value true.

  • copyast

    Part of the implementation of quasi-quote. The argument is a surface syntax AST that is simply copied recursively and returned at run time.

  • meta

    Metadata. args[1] is typically a symbol specifying the kind of metadata, and the rest of the arguments are free-form. The following kinds of metadata are commonly used:

    • :inline and :noinline: Inlining hints.

Method

A unique'd container describing the shared metadata for a single method.

  • name, module, file, line, sig

    Metadata to uniquely identify the method for the computer and the human.

  • ambig

    Cache of other methods that may be ambiguous with this one.

  • specializations

    Cache of all MethodInstances ever created for this Method, used to ensure uniqueness. Uniqueness is required for efficiency, especially for incremental precompilation and tracking of method invalidation.

  • source

    The original source code (usually compressed).

  • roots

    Pointers to non-AST things that have been interpolated into the AST, required by compression of the AST, type-inference, or the generation of native code.

  • nargs, isva, called, isstaged, pure

    Descriptive bit-fields for the source code of this Method.

  • min_world / max_world

    The range of world ages for which this method is visible to dispatch.

MethodInstance

A unique'd container describing a single callable signature for a Method. See especially Proper maintenance and care of multi-threading locks for important details on how to modify these fields safely.

  • specTypes

    The primary key for this MethodInstance. Uniqueness is guaranteed through a def.specializations lookup.

  • def

    The Method that this function describes a specialization of, or a Module if this is a top-level Lambda expanded in that Module rather than part of a Method.

  • sparam_vals

    The values of the static parameters in specTypes indexed by def.sparam_syms. For the MethodInstance at Method.unspecialized, this is the empty SimpleVector. But for a runtime MethodInstance from the MethodTable cache, this will always be defined and indexable.

  • rettype

    The inferred return type for the specFunctionObject field, which (in most cases) is also the computed return type for the function in general.

  • inferred

    May contain a cache of the inferred source for this function. Other information about the inference result, such as a constant return value, may be put here instead (if jlcall_api == 2), or it may be set to nothing to simply indicate that rettype is inferred.

  • fptr

    The generic jlcall entry point.

  • jlcall_api

    The ABI to use when calling fptr. Some significant ones include:

    • 0 - Not compiled yet
    • 1 - JL_CALLABLE jl_value_t *(*)(jl_function_t *f, jl_value_t *args[nargs], uint32_t nargs)
    • 2 - Constant (value stored in inferred)
    • 3 - With Static-parameters forwarded jl_value_t *(*)(jl_svec_t *sparams, jl_function_t *f, jl_value_t *args[nargs], uint32_t nargs)
    • 4 - Run in interpreter jl_value_t *(*)(jl_method_instance_t *meth, jl_function_t *f, jl_value_t *args[nargs], uint32_t nargs)
  • min_world / max_world

    The range of world ages for which this method instance is valid to be called.

CodeInfo

A temporary container for holding lowered source code.

  • code

    An Any array of statements

  • slotnames

    An array of symbols giving the name of each slot (argument or local variable).

  • slottypes

    An array of types for the slots.

  • slotflags

    A UInt8 array of slot properties, represented as bit flags:

    • 2 - assigned (only false if there are no assignment statements with this var on the left)
    • 8 - const (currently unused for local variables)
    • 16 - statically assigned once
    • 32 - might be used before assigned. This flag is only valid after type inference.
  • ssavaluetypes

    Either an array or an Int.

    If an Int, it gives the number of compiler-inserted temporary locations in the function. If an array, specifies a type for each location.

  • linetable

    An array of source location objects

  • codelocs

    An array of integer indices into the linetable, giving the location associated with each statement.

Boolean properties:

  • inferred

    Whether this has been produced by type inference.

  • inlineable

    Whether this should be inlined.

  • propagate_inbounds

    Whether this should propagate @inbounds when inlined, for the purpose of eliding @boundscheck blocks.

  • pure

    Whether this is known to be a pure function of its arguments, without respect to the state of the method caches or other mutable global state.
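To inspect these fields interactively (field names can shift between Julia versions), a CodeInfo can be obtained for any method via code_lowered:

julia> ci = first(code_lowered(abs, (Int,)));

julia> ci.code        # the statement array

julia> ci.slotnames   # names of the arguments and local variables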

Surface syntax AST

Front end ASTs consist almost entirely of Exprs and atoms (e.g. symbols, numbers). There is generally a different expression head for each visually distinct syntactic form. Examples will be given in s-expression syntax. Each parenthesized list corresponds to an Expr, where the first element is the head. For example (call f x) corresponds to Expr(:call, :f, :x) in Julia.
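This correspondence can be checked interactively: Meta.show_sexpr prints an Expr in the same s-expression style used in the tables below, while dump shows the nested Expr fields:

julia> ex = Meta.parse("f(x, y=1)");

julia> Meta.show_sexpr(ex)    # prints (:call, :f, :x, (:kw, :y, 1))

julia> ex.head, length(ex.args)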

Calls

Input              AST
f(x)               (call f x)
f(x, y=1, z=2)     (call f x (kw y 1) (kw z 2))
f(x; y=1)          (call f (parameters (kw y 1)) x)
f(x...)            (call f (... x))

do syntax:

f(x) do a,b
     body
 end

parses as (do (call f x) (-> (tuple a b) (block body))).

Operators

Most uses of operators are just function calls, so they are parsed with the head call. However some operators are special forms (not necessarily function calls), and in those cases the operator itself is the expression head. In julia-parser.scm these are referred to as "syntactic operators". Some operators (+ and *) use N-ary parsing; chained calls are parsed as a single N-argument call. Finally, chains of comparisons have their own special expression structure.

Input          AST
x+y            (call + x y)
a+b+c+d        (call + a b c d)
2x             (call * 2 x)
a&&b           (&& a b)
x += 1         (+= x 1)
a ? 1 : 2      (if a 1 2)
a:b            (: a b)
a:b:c          (: a b c)
a,b            (tuple a b)
a==b           (call == a b)
1<i<=n         (comparison 1 < i <= n)
a.b            (. a (quote b))
a.(b)          (. a b)

Bracketed forms

Input                      AST
a[i]                       (ref a i)
t[i;j]                     (typed_vcat t i j)
t[i j]                     (typed_hcat t i j)
t[a b; c d]                (typed_vcat t (row a b) (row c d))
a{b}                       (curly a b)
a{b;c}                     (curly a (parameters c) b)
[x]                        (vect x)
[x,y]                      (vect x y)
[x;y]                      (vcat x y)
[x y]                      (hcat x y)
[x y; z t]                 (vcat (row x y) (row z t))
[x for y in z, a in b]     (comprehension x (= y z) (= a b))
T[x for y in z]            (typed_comprehension T x (= y z))
(a, b, c)                  (tuple a b c)
(a; b; c)                  (block a (block b c))

Macros

Input          AST
@m x y         (macrocall @m (line) x y)
Base.@m x y    (macrocall (. Base (quote @m)) (line) x y)
@Base.m x y    (macrocall (. Base (quote @m)) (line) x y)

Strings

Input          AST
"a"            "a"
x"y"           (macrocall @x_str (line) "y")
x"y"z          (macrocall @x_str (line) "y" "z")
"x = $x"       (string "x = " x)
`a b c`        (macrocall @cmd (line) "a b c")

Doc string syntax:

"some docs"
 f(x) = x

parses as (macrocall (|.| Core '@doc) (line) "some docs" (= (call f x) (block x))).

Imports and such

Input               AST
import a            (import (. a))
import a.b.c        (import (. a b c))
import ...a         (import (. . . . a))
import a.b, c.d     (import (. a b) (. c d))
import Base: x      (import (: (. Base) (. x)))
import Base: x, y   (import (: (. Base) (. x) (. y)))
export a, b         (export a b)

Numbers

Julia supports more number types than many scheme implementations, so not all numbers are represented directly as scheme numbers in the AST.

Input                     AST
11111111111111111111      (macrocall @int128_str (null) "11111111111111111111")
0xfffffffffffffffff       (macrocall @uint128_str (null) "0xfffffffffffffffff")
1111...many digits...     (macrocall @big_str (null) "1111....")

Block forms

A block of statements is parsed as (block stmt1 stmt2 ...).

If statement:

if a
diff --git a/en/stable/devdocs/backtraces/index.html b/en/stable/devdocs/backtraces/index.html
index 27901200acd61..a37a085cc295e 100644
--- a/en/stable/devdocs/backtraces/index.html
+++ b/en/stable/devdocs/backtraces/index.html
@@ -6,7 +6,7 @@
 
 ga('create', 'UA-28835595-6', 'auto');
 ga('send', 'pageview');

Reporting and analyzing crashes (segfaults)

Reporting and analyzing crashes (segfaults)

So you managed to break Julia. Congratulations! Collected here are some general procedures you can undergo for common symptoms encountered when something goes awry. Including the information from these debugging steps can greatly help the maintainers when tracking down a segfault or trying to figure out why your script is running slower than expected.

If you've been directed to this page, find the symptom that best matches what you're experiencing and follow the instructions to generate the debugging information requested. Table of symptoms:

Version/Environment info

No matter the error, we will always need to know what version of Julia you are running. When Julia first starts up, a header is printed out with a version number and date. Please also include the output of versioninfo() in any report you create:

julia> using InteractiveUtils  # versioninfo lives in InteractiveUtils as of Julia 1.0

julia> versioninfo()

Segfaults during bootstrap (sysimg.jl)

Segfaults toward the end of the make process of building Julia are a common symptom of something going wrong while Julia is preparsing the corpus of code in the base/ folder. Many factors can contribute to this process dying unexpectedly; however, it is as often as not due to an error in the C-code portion of Julia, and as such must typically be debugged with a debug build inside of gdb. Explicitly:

Create a debug build of Julia:

$ cd <julia_root>
 $ make debug

Note that this process will likely fail with the same error as a normal make incantation; however, it will create a debug executable that offers gdb the debugging symbols needed to get accurate backtraces. Next, manually run the bootstrap process inside of gdb:

$ cd base/
 $ gdb -x ../contrib/debug_bootstrap.gdb

This will start gdb, attempt to run the bootstrap process using the debug build of Julia, and print out a backtrace if (when) it segfaults. You may need to hit <enter> a few times to get the full backtrace. Create a gist with the backtrace, the version info, and any other pertinent information you can think of and open a new issue on Github with a link to the gist.

Segfaults when running a script

The procedure is very similar to Segfaults during bootstrap (sysimg.jl). Create a debug build of Julia, and run your script inside of a debugged Julia process:

$ cd <julia_root>
diff --git a/en/stable/devdocs/boundscheck/index.html b/en/stable/devdocs/boundscheck/index.html
index d731118eee996..5d613f8cc4c68 100644
--- a/en/stable/devdocs/boundscheck/index.html
+++ b/en/stable/devdocs/boundscheck/index.html
@@ -6,7 +6,7 @@
 
 ga('create', 'UA-28835595-6', 'auto');
 ga('send', 'pageview');

Bounds checking

Bounds checking

Like many modern programming languages, Julia uses bounds checking to ensure program safety when accessing arrays. In tight inner loops or other performance critical situations, you may wish to skip these bounds checks to improve runtime performance. For instance, in order to emit vectorized (SIMD) instructions, your loop body cannot contain branches, and thus cannot contain bounds checks. Consequently, Julia includes an @inbounds(...) macro to tell the compiler to skip such bounds checks within the given block. User-defined array types can use the @boundscheck(...) macro to achieve context-sensitive code selection.

Eliding bounds checks

The @boundscheck(...) macro marks blocks of code that perform bounds checking. When such blocks are inlined into an @inbounds(...) block, the compiler may remove these blocks. The compiler removes the @boundscheck block only if it is inlined into the calling function. For example, you might write the method sum as:

function sum(A::AbstractArray)
    r = zero(eltype(A))
    for i = 1:length(A)
        @inbounds r += A[i]
    end
    return r
end
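The other half of the pattern is a user-defined access function that wraps its own check in @boundscheck, so that the check can be removed when the call is inlined into an @inbounds region (getvalue is an illustrative name, not part of Base):

@inline function getvalue(v::AbstractVector, i::Int)
    @boundscheck checkbounds(v, i)   # elided when inlined under @inbounds
    return @inbounds v[i]
end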
diff --git a/en/stable/devdocs/callconv/index.html b/en/stable/devdocs/callconv/index.html
index 4d9bdfd803e3f..6d5137c33e020 100644
--- a/en/stable/devdocs/callconv/index.html
+++ b/en/stable/devdocs/callconv/index.html
@@ -6,4 +6,4 @@
 
 ga('create', 'UA-28835595-6', 'auto');
 ga('send', 'pageview');

Calling Conventions

Calling Conventions

Julia uses three calling conventions for four distinct purposes:

Name       Prefix     Purpose
Native     julia_     Speed via specialized signatures
JL Call    jlcall_    Wrapper for generic calls
JL Call    jl_        Builtins
C ABI      jlcapi_    Wrapper callable from C

Julia Native Calling Convention

The native calling convention is designed for fast non-generic calls. It usually uses a specialized signature.

  • LLVM ghosts (zero-length types) are omitted.
  • LLVM scalars and vectors are passed by value.
  • LLVM aggregates (arrays and structs) are passed by reference.

Small return values are returned as LLVM return values. Large return values are returned via the "structure return" (sret) convention, where the caller provides a pointer to a return slot.

An argument or return value that is a homogeneous tuple is sometimes represented as an LLVM vector instead of an LLVM array.
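To see what is actually generated for a given specialization, the reflection helpers in InteractiveUtils can be used (output is omitted here, since it depends on the platform and Julia version):

julia> using InteractiveUtils

julia> code_llvm(hypot, (Float64, Float64))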

JL Call Convention

The JL Call convention is for builtins and generic dispatch. Hand-written functions using this convention are declared via the macro JL_CALLABLE. The convention uses exactly 3 parameters:

  • F - Julia representation of function that is being applied
  • args - pointer to array of pointers to boxes
  • nargs - length of the array

The return value is a pointer to a box.

C ABI

C ABI wrappers enable calling Julia from C. The wrapper calls a function using the native calling convention.

Tuples are always represented as C arrays.

diff --git a/en/stable/devdocs/cartesian/index.html b/en/stable/devdocs/cartesian/index.html
index 0004166d06850..f69f32be9cbfd 100644
--- a/en/stable/devdocs/cartesian/index.html
+++ b/en/stable/devdocs/cartesian/index.html
@@ -6,7 +6,7 @@
 
 ga('create', 'UA-28835595-6', 'auto');
 ga('send', 'pageview');


Base.Cartesian

Base.Cartesian

The (non-exported) Cartesian module provides macros that facilitate writing multidimensional algorithms. It is hoped that Cartesian will not, in the long term, be necessary; however, at present it is one of the few ways to write compact and performant multidimensional code.

Principles of usage

A simple example of usage is:

@nloops 3 i A begin
     s += @nref 3 A i
 end

which generates the following code:

for i_3 = 1:size(A,3)
    for i_2 = 1:size(A,2)
        for i_1 = 1:size(A,1)
            s += A[i_1, i_2, i_3]
        end
    end
end
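A self-contained variant of the same pattern, summing every element of a 3-d array (sumall is an illustrative name; the macros live in the non-exported Base.Cartesian module):

using Base.Cartesian

function sumall(A::AbstractArray{T,3}) where T
    s = zero(T)
    @nloops 3 i A begin
        s += @nref 3 A i
    end
    return s
end

sumall(rand(2, 3, 4))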
diff --git a/en/stable/devdocs/compiler/index.html b/en/stable/devdocs/compiler/index.html
index 301cb3c236642..be1f1cfe39dfb 100644
--- a/en/stable/devdocs/compiler/index.html
+++ b/en/stable/devdocs/compiler/index.html
@@ -6,4 +6,4 @@
 
 ga('create', 'UA-28835595-6', 'auto');
 ga('send', 'pageview');

High-level Overview of the Native-Code Generation Process

High-level Overview of the Native-Code Generation Process

Representation of Pointers

When emitting code to an object file, pointers will be emitted as relocations. The deserialization code will ensure any object that pointed to one of these constants gets recreated and contains the right runtime pointer.

Otherwise, they will be emitted as literal constants.

To emit one of these objects, call literal_pointer_val. It'll handle tracking the Julia value and the LLVM global, ensuring they are valid both for the current runtime and after deserialization.

When emitted into the object file, these globals are stored as references in a large gvals table. This allows the deserializer to reference them by index, and implement a custom manual mechanism similar to a Global Offset Table (GOT) to restore them.

Function pointers are handled similarly. They are stored as values in a large fvals table. Like globals, this allows the deserializer to reference them by index.

Note that extern functions are handled separately, with names, via the usual symbol resolution mechanism in the linker.

Note too that ccall functions are also handled separately, via a manual GOT and Procedure Linkage Table (PLT).

Representation of Intermediate Values

Values are passed around in a jl_cgval_t struct. This represents an R-value, and includes enough information to determine how to assign or pass it somewhere.

They are created via one of the helper constructors, usually: mark_julia_type (for immediate values) and mark_julia_slot (for pointers to values).

The function convert_julia_type can transform between any two types. It returns an R-value with cgval.typ set to typ. It'll cast the object to the requested representation, making heap boxes, allocating stack copies, and computing tagged unions as needed to change the representation.

By contrast update_julia_type will change cgval.typ to typ, only if it can be done at zero-cost (i.e. without emitting any code).

Union representation

Inferred union types may be stack allocated via a tagged type representation.

The primitive routines that need to be able to handle tagged unions are:

  • mark-type
  • load-local
  • store-local
  • isa
  • is
  • emit_typeof
  • emit_sizeof
  • boxed
  • unbox
  • specialized cc-ret

Everything else should be possible to handle in inference by using these primitives to implement union-splitting.

The representation of the tagged-union is as a pair of < void* union, byte selector >. The selector is fixed-size as byte & 0x7f, and will union-tag the first 126 isbits. It records the one-based depth-first count into the type-union of the isbits objects inside. An index of zero indicates that the union* is actually a tagged heap-allocated jl_value_t*, and needs to be treated as normal for a boxed object rather than as a tagged union.

The high bit of the selector (byte & 0x80) can be tested to determine if the void* is actually a heap-allocated (jl_value_t*) box, thus avoiding the cost of re-allocating a box, while maintaining the ability to efficiently handle union-splitting based on the low bits.

It is guaranteed that byte & 0x7f is an exact test for the type, if the value can be represented by a tag – it will never be marked byte = 0x80. It is not necessary to also test the type-tag when testing isa.

The union* memory region may be allocated at any size. The only constraint is that it is big enough to contain the data currently specified by selector. It might not be big enough to contain the union of all types that could be stored there according to the associated Union type field. Use appropriate care when copying.

Specialized Calling Convention Signature Representation

A jl_returninfo_t object describes the calling convention details of any callable.

If any of the arguments or return type of a method can be represented unboxed, and the method is not varargs, it'll be given an optimized calling convention signature based on its specTypes and rettype fields.

The general principles are that:

  • Primitive types get passed in int/float registers.
  • Tuples of VecElement types get passed in vector registers.
  • Structs get passed on the stack.
  • Return values are handled similarly to arguments, with a size-cutoff at which they will instead be returned via a hidden sret argument.

The total logic for this is implemented by get_specsig_function and deserves_sret.

Additionally, if the return type is a union, it may be returned as a pair of values (a pointer and a tag). If the union values can be stack-allocated, then sufficient space to store them will also be passed as a hidden first argument. It is up to the callee whether the returned pointer will point to this space, a boxed object, or even other constant memory.

diff --git a/en/stable/devdocs/debuggingtips/index.html b/en/stable/devdocs/debuggingtips/index.html
index 0883fb4a5cca1..926363be472f8 100644
--- a/en/stable/devdocs/debuggingtips/index.html
+++ b/en/stable/devdocs/debuggingtips/index.html
@@ -6,7 +6,7 @@
 
 ga('create', 'UA-28835595-6', 'auto');
 ga('send', 'pageview');


gdb debugging tips

gdb debugging tips

Displaying Julia variables

Within gdb, any jl_value_t* object obj can be displayed using

(gdb) call jl_(obj)

The object will be displayed in the julia session, not in the gdb session. This is a useful way to discover the types and values of objects being manipulated by Julia's C code.

Similarly, if you're debugging some of Julia's internals (e.g., compiler.jl), you can print obj using

ccall(:jl_, Cvoid, (Any,), obj)

This is a good way to circumvent problems that arise from the order in which julia's output streams are initialized.

Julia's flisp interpreter uses value_t objects; these can be displayed with call fl_print(fl_ctx, ios_stdout, obj).

Useful Julia variables for Inspecting

While the addresses of many variables, like singletons, can be useful to print for many failures, there are a number of additional variables (see julia.h for a complete list) that are even more useful.

  • (when in jl_apply_generic) mfunc and jl_uncompress_ast(mfunc->def, mfunc->code) :: for figuring out a bit about the call-stack
  • jl_lineno and jl_filename :: for figuring out what line in a test to go start debugging from (or figure out how far into a file has been parsed)
  • $1 :: not really a variable, but still a useful shorthand for referring to the result of the last gdb command (such as print)
  • jl_options :: sometimes useful, since it lists all of the command line options that were successfully parsed
  • jl_uv_stderr :: because who doesn't like to be able to interact with stdio

Useful Julia functions for Inspecting those variables

  • jl_gdblookup($rip) :: For looking up the current function and line. (use $eip on i686 platforms)
  • jlbacktrace() :: For dumping the current Julia backtrace stack to stderr. Only usable after record_backtrace() has been called.
  • jl_dump_llvm_value(Value*) :: For invoking Value->dump() in gdb, where it doesn't work natively. For example, f->linfo->functionObject, f->linfo->specFunctionObject, and to_function(f->linfo).
  • Type->dump() :: only works in lldb. Note: add something like ;1 to prevent lldb from printing its prompt over the output
  • jl_eval_string("expr") :: for invoking side-effects to modify the current state or to lookup symbols
  • jl_typeof(jl_value_t*) :: for extracting the type tag of a Julia value (in gdb, call macro define jl_typeof jl_typeof first, or pick something short like ty for the first arg to define a shorthand)

Inserting breakpoints for inspection from gdb

In your gdb session, set a breakpoint in jl_breakpoint like so:

(gdb) break jl_breakpoint

Then within your Julia code, insert a call to jl_breakpoint by adding

ccall(:jl_breakpoint, Cvoid, (Any,), obj)

where obj can be any variable or tuple you want to be accessible in the breakpoint.
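If you do this often, a tiny helper keeps the call sites short (breakat is a hypothetical name, not part of Base; x and y stand for whatever locals you care about):

breakat(obj) = ccall(:jl_breakpoint, Cvoid, (Any,), obj)

breakat((x, y))   # stops in gdb with the tuple reachable from the breakpoint frame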

It's particularly helpful to back up to the jl_apply frame, from which you can display the arguments to a function using, e.g.,

(gdb) call jl_(args[0])

Another useful frame is to_function(jl_method_instance_t *li, bool cstyle). The jl_method_instance_t* argument is a struct with a reference to the final AST sent into the compiler. However, the AST at this point will usually be compressed; to view the AST, call jl_uncompress_ast and then pass the result to jl_:

#2  0x00007ffff7928bf7 in to_function (li=0x2812060, cstyle=false) at codegen.cpp:584
 584          abort();
 (gdb) p jl_(jl_uncompress_ast(li, li->ast))

Inserting breakpoints upon certain conditions

Loading a particular file

Let's say the file is sysimg.jl:

(gdb) break jl_load if strcmp(fname, "sysimg.jl")==0

Calling a particular method

(gdb) break jl_apply_generic if strcmp((char*)(jl_symbol_name)(jl_gf_mtable(F)->name), "method_to_break")==0

Since this function is used for every call, you will make everything 1000x slower if you do this.

Dealing with signals

Julia requires a few signals to function properly. The profiler uses SIGUSR2 for sampling and the garbage collector uses SIGSEGV for thread synchronization. If you are debugging some code that uses the profiler or multiple threads, you may want to let the debugger ignore these signals, since they can be triggered very often during normal operation. The command to do this in GDB is (replace SIGSEGV with SIGUSR2 or any other signal you want to ignore):

(gdb) handle SIGSEGV noprint nostop pass

The corresponding LLDB command is (after the process is started):

(lldb) pro hand -p true -s false -n false SIGSEGV
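
As a concrete illustration of when the sampling signal mentioned above actually fires, running the profiler from the Julia prompt is enough (a minimal sketch; Profile is a standard library):

using Profile
@profile sum(rand(10^7))   # while this runs, the sampler periodically interrupts the process
Profile.print()            # display the collected samples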

If you are debugging a segfault with threaded code, you can set a breakpoint on jl_critical_error (sigdie_handler should also work on Linux and BSD) in order to only catch the actual segfault rather than the GC synchronization points.

Debugging during Julia's build process (bootstrap)

Errors that occur during make need special handling. Julia is built in two stages, constructing sys0 and sys.ji. To see what commands are running at the time of failure, use make VERBOSE=1.

At the time of this writing, you can debug build errors during the sys0 phase from the base directory using:

julia/base$ gdb --args ../usr/bin/julia-debug -C native --build ../usr/lib/julia/sys0 sysimg.jl

You might need to delete all the files in usr/lib/julia/ to get this to work.

You can debug the sys.ji phase using:

julia/base$ gdb --args ../usr/bin/julia-debug -C native --build ../usr/lib/julia/sys -J ../usr/lib/julia/sys0.ji sysimg.jl

By default, any errors will cause Julia to exit, even under gdb. To catch an error "in the act", set a breakpoint in jl_error (there are several other useful spots, for specific kinds of failures, including: jl_too_few_args, jl_too_many_args, and jl_throw).

Once an error is caught, a useful technique is to walk up the stack and examine the function by inspecting the related call to jl_apply. To take a real-world example:

Breakpoint 1, jl_throw (e=0x7ffdf42de400) at task.c:802
 802 {
diff --git a/en/stable/devdocs/eval/index.html b/en/stable/devdocs/eval/index.html
index 458a3fbb79cfb..290baacdad9ec 100644
--- a/en/stable/devdocs/eval/index.html
+++ b/en/stable/devdocs/eval/index.html
@@ -6,4 +6,4 @@
 
 ga('create', 'UA-28835595-6', 'auto');
 ga('send', 'pageview');

Eval of Julia code

One of the hardest parts about learning how the Julia Language runs code is learning how all of the pieces work together to execute a block of code.

Each chunk of code typically makes a trip through many steps with potentially unfamiliar names, such as (in no particular order): flisp, AST, C++, LLVM, eval, typeinf, macroexpand, sysimg (or system image), bootstrapping, compile, parse, execute, JIT, interpret, box, unbox, intrinsic function, and primitive function, before turning into the desired result (hopefully).

Julia Execution

The 10,000 foot view of the whole process is as follows:

  1. The user starts julia.
  2. The C function main() from ui/repl.c gets called. This function processes the command line arguments, filling in the jl_options struct and setting the variable ARGS. It then initializes Julia (by calling julia_init in task.c, which may load a previously compiled sysimg). Finally, it passes off control to Julia by calling Base._start().
  3. When _start() takes over control, the subsequent sequence of commands depends on the command line arguments given. For example, if a filename was supplied, it will proceed to execute that file. Otherwise, it will start an interactive REPL.
  4. Skipping the details about how the REPL interacts with the user, let's just say the program ends up with a block of code that it wants to run.
  5. If the block of code to run is in a file, jl_load(char *filename) gets invoked to load the file and parse it. Each fragment of code is then passed to eval to execute.
  6. Each fragment of code (or AST) is handed off to eval() to turn into results.
  7. eval() takes each code fragment and tries to run it in jl_toplevel_eval_flex().
  8. jl_toplevel_eval_flex() decides whether the code is a "toplevel" action (such as using or module), which would be invalid inside a function. If so, it passes off the code to the toplevel interpreter.
  9. jl_toplevel_eval_flex() then expands the code to eliminate any macros and to "lower" the AST to make it simpler to execute.
  10. jl_toplevel_eval_flex() then uses some simple heuristics to decide whether to JIT-compile the AST or to interpret it directly.
  11. The bulk of the work to interpret code is handled by eval in interpreter.c.
  12. If instead, the code is compiled, the bulk of the work is handled by codegen.cpp. Whenever a Julia function is called for the first time with a given set of argument types, type inference will be run on that function. This information is used by the codegen step to generate faster code.
  13. Eventually, the user quits the REPL, or the end of the program is reached, and the _start() method returns.
  14. Just before exiting, main() calls jl_atexit_hook(exit_code). This calls Base._atexit() (which calls any functions registered to atexit() inside Julia). Then it calls jl_gc_run_all_finalizers(). Finally, it gracefully cleans up all libuv handles and waits for them to flush and close.

Parsing

The Julia parser is a small lisp program written in femtolisp, the source-code for which is distributed inside Julia in src/flisp.

The interface functions for this are primarily defined in jlfrontend.scm. The code in ast.c handles this handoff on the Julia side.

The other relevant files at this stage are julia-parser.scm, which handles tokenizing Julia code and turning it into an AST, and julia-syntax.scm, which handles transforming complex AST representations into simpler, "lowered" AST representations which are more suitable for analysis and execution.
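
The surface AST produced by this parser can be inspected directly from Julia (a small sketch; dump simply prints the Expr tree):

ex = Meta.parse("x + y*2")   # run the femtolisp parser on a string of Julia code
typeof(ex)                   # Expr
dump(ex)                     # the surface AST: (call + x (call * y 2))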

Macro Expansion

When eval() encounters a macro, it expands that AST node before attempting to evaluate the expression. Macro expansion involves a handoff from eval() (in Julia), to the parser function jl_macroexpand() (written in flisp) to the Julia macro itself (written in - what else - Julia) via fl_invoke_julia_macro(), and back.

Typically, macro expansion is invoked as a first step during a call to Meta.lower()/jl_expand(), although it can also be invoked directly by a call to macroexpand()/jl_macroexpand().
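
For example, the expansion of a macro can be requested directly, without evaluating the result (a minimal sketch using the standard @assert macro):

@macroexpand @assert x > 0            # the Expr that @assert expands to
macroexpand(Main, :(@assert x > 0))   # the same expansion via the function form, in module Main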

Type Inference

Type inference is implemented in Julia by typeinf() in compiler/typeinfer.jl. Type inference is the process of examining a Julia function and determining bounds for the types of each of its variables, as well as bounds on the type of the return value from the function. This enables many future optimizations, such as unboxing of known immutable values, and compile-time hoisting of various run-time operations such as computing field offsets and function pointers. Type inference may also include other steps such as constant propagation and inlining.
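
The results of inference can be observed with ordinary reflection (a sketch; InteractiveUtils is a standard library and Base.return_types is an internal but long-standing helper):

Base.return_types(+, Tuple{Int, Float64})   # the inferred return type(s) for this signature
using InteractiveUtils
@code_typed 1 + 2.5                          # the type-annotated IR produced after inference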

JIT Code Generation

Codegen is the process of turning a Julia AST into native machine code.

The JIT environment is initialized by an early call to jl_init_codegen in codegen.cpp.

On demand, a Julia method is converted into a native function by the function emit_function(jl_method_instance_t*). (Note that when using MCJIT (in LLVM v3.4+), each function must be JITed into a new module.) This function recursively calls emit_expr() until the entire function has been emitted.

Much of the remaining bulk of this file is devoted to various manual optimizations of specific code patterns. For example, emit_known_call() knows how to inline many of the primitive functions (defined in builtins.c) for various combinations of argument types.

Other parts of codegen are handled by various helper files:

  • debuginfo.cpp

    Handles backtraces for JIT functions

  • ccall.cpp

    Handles the ccall and llvmcall FFI, along with various abi_*.cpp files

  • intrinsics.cpp

    Handles the emission of various low-level intrinsic functions

System Image

The system image is a precompiled archive of a set of Julia files. The sys.ji file distributed with Julia is one such system image, generated by executing the file sysimg.jl, and serializing the resulting environment (including Types, Functions, Modules, and all other defined values) into a file. Therefore, it contains a frozen version of the Main, Core, and Base modules (and whatever else was in the environment at the end of bootstrapping). This serializer/deserializer is implemented by jl_save_system_image/jl_restore_system_image in staticdata.c.

If there is no sysimg file (jl_options.image_file == NULL), this also implies that --build was given on the command line, so the final result should be a new sysimg file. During Julia initialization, minimal Core and Main modules are created. Then a file named boot.jl is evaluated from the current directory. Julia then evaluates any file given as a command line argument until it reaches the end. Finally, it saves the resulting environment to a "sysimg" file for use as a starting point for a future Julia run.

diff --git a/en/stable/devdocs/functions/index.html b/en/stable/devdocs/functions/index.html
index ed59b81d2dfc7..148052c9996f6 100644
--- a/en/stable/devdocs/functions/index.html
+++ b/en/stable/devdocs/functions/index.html
@@ -6,7 +6,7 @@
 
 ga('create', 'UA-28835595-6', 'auto');
 ga('send', 'pageview');

Julia Functions

This document will explain how functions, method definitions, and method tables work.

Method Tables

Every function in Julia is a generic function. A generic function is conceptually a single function, but consists of many definitions, or methods. The methods of a generic function are stored in a method table. Method tables (type MethodTable) are associated with TypeNames. A TypeName describes a family of parameterized types. For example Complex{Float32} and Complex{Float64} share the same Complex type name object.

All objects in Julia are potentially callable, because every object has a type, which in turn has a TypeName.

Function calls

Given the call f(x,y), the following steps are performed: first, the method table to use is accessed as typeof(f).name.mt. Second, an argument tuple type is formed, Tuple{typeof(f), typeof(x), typeof(y)}. Note that the type of the function itself is the first element. This is because the type might have parameters, and so needs to take part in dispatch. This tuple type is looked up in the method table.

This dispatch process is performed by jl_apply_generic, which takes two arguments: a pointer to an array of the values f, x, and y, and the number of values (in this case 3).
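
The same lookup can be reproduced with reflection from the Julia side (a sketch; which and the mt field are the user-visible counterparts of this machinery):

f(x, y) = x + y
Tuple{typeof(f), Int, Int}    # the argument tuple type formed for the call f(1, 2)
typeof(f).name.mt             # the method table hanging off the function's TypeName
which(f, Tuple{Int, Int})     # the Method that dispatch selects for these argument types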

Throughout the system, there are two kinds of APIs that handle functions and argument lists: those that accept the function and arguments separately, and those that accept a single argument structure. In the first kind of API, the "arguments" part does not contain information about the function, since that is passed separately. In the second kind of API, the function is the first element of the argument structure.

For example, the following function for performing a call accepts just an args pointer, so the first element of the args array will be the function to call:

jl_value_t *jl_apply(jl_value_t **args, uint32_t nargs)

This entry point for the same functionality accepts the function separately, so the args array does not contain the function:

jl_value_t *jl_call(jl_function_t *f, jl_value_t **args, int32_t nargs);

Adding methods

Given the above dispatch process, conceptually all that is needed to add a new method is (1) a tuple type, and (2) code for the body of the method. jl_method_def implements this operation. jl_first_argument_datatype is called to extract the relevant method table from what would be the type of the first argument. This is much more complicated than the corresponding procedure during dispatch, since the argument tuple type might be abstract. For example, we can define:

(::Union{Foo{Int},Foo{Int8}})(x) = 0

which works since all possible matching methods would belong to the same method table.
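
To make the snippet above runnable, a parametric type Foo is needed; the definition below is only a hypothetical stand-in:

struct Foo{T} end                       # hypothetical type, not part of Base
(::Union{Foo{Int}, Foo{Int8}})(x) = 0   # the method definition from the example above
Foo{Int}()(42)       # returns 0
Foo{Int8}()("any")   # returns 0 as well; both parameterizations share the Foo method table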

Creating generic functions

Since every object is callable, nothing special is needed to create a generic function. Therefore jl_new_generic_function simply creates a new singleton (0 size) subtype of Function and returns its instance. A function can have a mnemonic "display name" which is used in debug info and when printing objects. For example the name of Base.sin is sin. By convention, the name of the created type is the same as the function name, with a # prepended. So typeof(sin) is Base.#sin.

Closures

A closure is simply a callable object with field names corresponding to captured variables. For example, the following code:

function adder(x)
     return y->x+y
 end

is lowered to (roughly):

struct ##1{T}
     x::T
diff --git a/en/stable/devdocs/inference/index.html b/en/stable/devdocs/inference/index.html
index 43bcc22553e38..1556732ff335a 100644
--- a/en/stable/devdocs/inference/index.html
+++ b/en/stable/devdocs/inference/index.html
@@ -6,7 +6,7 @@
 
 ga('create', 'UA-28835595-6', 'auto');
 ga('send', 'pageview');

Inference

How inference works

Type inference refers to the process of deducing the types of later values from the types of input values. Julia's approach to inference has been described in blog posts (1, 2).

Debugging compiler.jl

You can start a Julia session, edit compiler/*.jl (for example to insert print statements), and then replace Core.Compiler in your running session by navigating to base/compiler and executing include("compiler.jl"). This trick typically leads to much faster development than if you rebuild Julia for each change.

A convenient entry point into inference is typeinf_code. Here's a demo running inference on convert(Int, UInt(1)):

# Get the method
 atypes = Tuple{Type{Int}, UInt}  # argument types
 mths = methods(convert, atypes)  # worth checking that there is only one
 m = first(mths)
diff --git a/en/stable/devdocs/init/index.html b/en/stable/devdocs/init/index.html
index 4f5a9967d294b..6b76776f49555 100644
--- a/en/stable/devdocs/init/index.html
+++ b/en/stable/devdocs/init/index.html
@@ -6,7 +6,7 @@
 
 ga('create', 'UA-28835595-6', 'auto');
 ga('send', 'pageview');

Initialization of the Julia runtime

How does the Julia runtime execute julia -e 'println("Hello World!")' ?

main()

Execution starts at main() in ui/repl.c.

main() calls libsupport_init() to set the C library locale and to initialize the "ios" library (see ios_init_stdstreams() and Legacy ios.c library).

Next jl_parse_opts() is called to process command line options. Note that jl_parse_opts() only deals with options that affect code generation or early initialization. Other options are handled later by process_options() in base/client.jl.

jl_parse_opts() stores command line options in the global jl_options struct.
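
From the Julia side the parsed options are visible through Base.JLOptions(), which mirrors the C struct field for field (an illustration only; the field names are internal and may change between versions):

opts = Base.JLOptions()          # read-only snapshot of the C-level jl_options struct
opts.opt_level                   # e.g. the optimization level selected with -O
unsafe_string(opts.image_file)   # path of the system image that was loaded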

julia_init()

julia_init() in task.c is called by main() and calls _julia_init() in init.c.

_julia_init() begins by calling libsupport_init() again (it does nothing the second time).

restore_signals() is called to zero the signal handler mask.

jl_resolve_sysimg_location() searches configured paths for the base system image. See Building the Julia system image.

jl_gc_init() sets up allocation pools and lists for weak refs, preserved values and finalization.

jl_init_frontend() loads and initializes a pre-compiled femtolisp image containing the scanner/parser.

jl_init_types() creates jl_datatype_t type description objects for the built-in types defined in julia.h. e.g.

jl_any_type = jl_new_abstracttype(jl_symbol("Any"), core, NULL, jl_emptysvec);
 jl_any_type->super = jl_any_type;
 
 jl_type_type = jl_new_abstracttype(jl_symbol("Type"), core, jl_any_type, jl_emptysvec);
diff --git a/en/stable/devdocs/isbitsunionarrays/index.html b/en/stable/devdocs/isbitsunionarrays/index.html
index 2879f8b9f9d77..a8e837fb2ed25 100644
--- a/en/stable/devdocs/isbitsunionarrays/index.html
+++ b/en/stable/devdocs/isbitsunionarrays/index.html
@@ -6,4 +6,4 @@
 
 ga('create', 'UA-28835595-6', 'auto');
 ga('send', 'pageview');

isbits Union Optimizations

In Julia, the Array type holds both "bits" values as well as heap-allocated "boxed" values. The distinction is whether the value itself is stored inline (in the direct allocated memory of the array), or if the memory of the array is simply a collection of pointers to objects allocated elsewhere. In terms of performance, accessing values inline is clearly an advantage over having to follow a pointer to the actual value. The definition of "isbits" generally means any Julia type with a fixed, determinate size, meaning no "pointer" fields, see ?isbitstype.

Julia also supports Union types, quite literally the union of a set of types. Custom Union type definitions can be extremely handy for applications wishing to "cut across" the nominal type system (i.e. explicit subtype relationships) and define methods or functionality on this otherwise unrelated set of types. A compiler challenge, however, is determining how to treat these Union types. The naive approach (and indeed what Julia itself did pre-0.7) is to simply make a "box", with a pointer in the box to the actual value, similar to the previously mentioned "boxed" values. This is unfortunate, however, because many small, primitive "bits" types (think UInt8, Int32, Float64, etc.) would easily fit inline in this "box" without needing any indirection for value access. There are two main ways Julia can take advantage of this as of 0.7: isbits Union fields in types, and isbits Union Arrays.

isbits Union Structs

Julia now includes an optimization wherein "isbits Union" fields in types (mutable struct, struct, etc.) will be stored inline. This is accomplished by determining the "inline size" of the Union type (e.g. Union{UInt8, Int16} will have a size of 16 bits, the size needed for the largest Union member, Int16), and in addition, allocating an extra "type tag byte" (UInt8) whose value signals the type of the value stored inline in the "Union bytes". The type tag byte value is the index of the actual value's type in the Union type's order of types. For example, a type tag value of 0x02 for a field with type Union{Nothing, UInt8, Int16} would indicate that an Int16 value is stored in the 16 bits of the field in the structure's memory; a 0x01 value would indicate that a UInt8 value was stored in the first 8 bits of the field's 16 bits. Lastly, a value of 0x00 signals that the nothing value will be returned for this field, even though, as a singleton type with a single type instance, it technically has a size of 0. The type tag byte for a type's Union field is stored directly after the field's computed Union memory.
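
A small sketch of such a field (the struct below is hypothetical; the comments restate the tagging scheme described above):

struct MaybeCount
    n::Union{Nothing, UInt8, Int16}   # isbits Union field: payload stored inline plus a tag byte
end
MaybeCount(nothing).n === nothing     # the tag selects the singleton nothing; no payload bytes are used
MaybeCount(Int16(7)).n                # the Int16 payload lives inline in the field's memory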

isbits Union Arrays

Julia can now also store "isbits Union" values inline in an Array, as opposed to requiring an indirection box. The optimization is accomplished by storing an extra "type tag array" of bytes, one byte per array element, alongside the bytes of the actual array data. This type tag array serves the same function as in the field case: its value signals the type of the actual Union value stored at that position in the array. In terms of layout, a Julia Array can include extra "buffer" space before and after its actual data values, which is tracked in the a->offset and a->maxsize fields of the jl_array_t* type. The "type tag array" is treated exactly as another jl_array_t*, but one which shares the same a->offset, a->maxsize, and a->len fields. So the formula to access an isbits Union Array's type tag bytes is a->data + (a->maxsize - a->offset) * a->elsize + a->offset; i.e. the Array's a->data pointer is already shifted by a->offset, so correcting for that, we follow the data all the way to the maximum it can hold, a->maxsize, then adjust by a->offset more bytes to account for any "front buffering" the array might be doing. This layout in particular allows for very efficient resizing operations, since the type tag data only ever has to move when the actual array's data has to move.
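
For example (a minimal sketch; the per-element tag bytes are an implementation detail and not directly visible from Julia code):

A = Union{UInt8, Int16}[0x01, Int16(2), 0x03]   # elements are stored inline, each with a hidden tag byte
eltype(A)                                        # Union{UInt8, Int16}
push!(A, Int16(-1))                              # resizing moves the tag bytes along with the data
A[4]                                             # Int16(-1), recovered by reading its tag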

diff --git a/en/stable/devdocs/llvm/index.html b/en/stable/devdocs/llvm/index.html
index 52d7511a80a74..ca810d172c49a 100644
--- a/en/stable/devdocs/llvm/index.html
+++ b/en/stable/devdocs/llvm/index.html
@@ -6,7 +6,7 @@
 
 ga('create', 'UA-28835595-6', 'auto');
 ga('send', 'pageview');

Working with LLVM

This is not a replacement for the LLVM documentation, but a collection of tips for working on LLVM for Julia.

Overview of Julia to LLVM Interface

Julia dynamically links against LLVM by default. Build with USE_LLVM_SHLIB=0 to link statically.

The code for lowering Julia AST to LLVM IR or interpreting it directly is in directory src/.

File                Description
builtins.c          Builtin functions
ccall.cpp           Lowering ccall
cgutils.cpp         Lowering utilities, notably for array and tuple accesses
codegen.cpp         Top-level of code generation, pass list, lowering builtins
debuginfo.cpp       Tracks debug information for JIT code
disasm.cpp          Handles native object file and JIT code disassembly
gf.c                Generic functions
intrinsics.cpp      Lowering intrinsics
llvm-simdloop.cpp   Custom LLVM pass for @simd
sys.c               I/O and operating system utility functions

Some of the .cpp files form a group that compile to a single object.

The difference between an intrinsic and a builtin is that a builtin is a first class function that can be used like any other Julia function. An intrinsic can operate only on unboxed data, and therefore its arguments must be statically typed.
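
Both kinds can be called from Julia code, which makes the distinction easy to see (illustrative only):

Core.tuple(1, "two")                          # a builtin: a first-class function accepting any arguments
Core.Intrinsics.add_int(Int32(1), Int32(2))   # an intrinsic: operates on unboxed values of matching type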

Alias Analysis

Julia currently uses LLVM's Type Based Alias Analysis. To find the comments that document the inclusion relationships, look for static MDNode* in src/codegen.cpp.

The -O option enables LLVM's Basic Alias Analysis.

Building Julia with a different version of LLVM

The default version of LLVM is specified in deps/Versions.make. You can override it by creating a file called Make.user in the top-level directory and adding a line to it such as:

LLVM_VER = 3.5.0

Besides the LLVM release numerals, you can also use LLVM_VER = svn to build against the latest development version of LLVM.

You can also build a debug version of LLVM by setting either LLVM_DEBUG = 1 or LLVM_DEBUG = Release in your Make.user file. The former is a fully unoptimized build of LLVM, while the latter produces an optimized build. Depending on your needs, the latter will often suffice and is quite a bit faster to build. If you use LLVM_DEBUG = Release you will also want to set LLVM_ASSERTIONS = 1 to enable diagnostics for the different passes; only LLVM_DEBUG = 1 implies that option by default.

Passing options to LLVM

You can pass options to LLVM via the environment variable JULIA_LLVM_ARGS. Here are example settings using bash syntax:

  • export JULIA_LLVM_ARGS=-print-after-all dumps IR after each pass.
  • export JULIA_LLVM_ARGS=-debug-only=loop-vectorize dumps LLVM DEBUG(...) diagnostics for the loop vectorizer. If you get warnings about "Unknown command line argument", rebuild LLVM with LLVM_ASSERTIONS = 1.

Debugging LLVM transformations in isolation

On occasion, it can be useful to debug LLVM's transformations in isolation from the rest of the Julia system, e.g. because reproducing the issue inside julia would take too long, or because one wants to take advantage of LLVM's tooling (e.g. bugpoint). To get unoptimized IR for the entire system image, pass the --output-unopt-bc unopt.bc option to the system image build process, which will output the unoptimized IR to an unopt.bc file. This file can then be passed to LLVM tools as usual. libjulia can function as an LLVM pass plugin and can be loaded into LLVM tools, to make julia-specific passes available in this environment. In addition, it exposes the -julia meta-pass, which runs the entire Julia pass-pipeline over the IR. As an example, to generate a system image, one could do:

opt -load libjulia.so -julia -o opt.bc unopt.bc
 llc -o sys.o opt.bc
 cc -shared -o sys.so sys.o

This system image can then be loaded by julia as usual.

Alternatively, you can use --output-jit-bc jit.bc to obtain a trace of all IR passed to the JIT. This is useful for code that cannot be run as part of the sysimg generation process (e.g. because it creates unserializable state). However, the resulting jit.bc does not include sysimage data, and can thus not be used as such.

It is also possible to dump an LLVM IR module for just one Julia function, using:

f, T = +, Tuple{Int,Int} # Substitute your function of interest here
 optimize = false
diff --git a/en/stable/devdocs/locks/index.html b/en/stable/devdocs/locks/index.html
index c03186e473525..2eb08aee2f167 100644
--- a/en/stable/devdocs/locks/index.html
+++ b/en/stable/devdocs/locks/index.html
@@ -6,4 +6,4 @@
 
 ga('create', 'UA-28835595-6', 'auto');
 ga('send', 'pageview');

Proper maintenance and care of multi-threading locks

The following strategies are used to ensure that the code is deadlock-free (generally by addressing the 4th Coffman condition: circular wait).

  1. structure code such that only one lock will need to be acquired at a time
  2. always acquire shared locks in the same order, as given by the table below
  3. avoid constructs that expect to need unrestricted recursion

Locks

Below are all of the locks that exist in the system and the mechanisms for using them that avoid the potential for deadlocks (no Ostrich algorithm allowed here):

The following are definitely leaf locks (level 1), and must not try to acquire any other lock:

  • safepoint

    Note that this lock is acquired implicitly by JL_LOCK and JL_UNLOCK. Use the _NOGC variants to avoid that for level 1 locks.

    While holding this lock, the code must not do any allocation or hit any safepoints. Note that there are safepoints when doing allocation, enabling / disabling GC, entering / restoring exception frames, and taking / releasing locks.

  • shared_map

  • finalizers

  • pagealloc

  • gcpermlock

  • flisp

    flisp itself is already threadsafe, this lock only protects the jl_ast_context_list_t pool

The following is a leaf lock (level 2), and only acquires level 1 locks (safepoint) internally:

  • typecache

The following is a level 3 lock, which can only acquire level 1 or level 2 locks internally:

  • Method->writelock

The following is a level 4 lock, which can only recurse to acquire level 1, 2, or 3 locks:

  • MethodTable->writelock

No Julia code may be called while holding a lock above this point.

The following is a level 6 lock, which can only recurse to acquire locks at lower levels:

  • codegen

The following is an almost root lock (level end-1), meaning only the root lock may be held when trying to acquire it:

  • typeinf

    this one is perhaps one of the most tricky ones, since type-inference can be invoked from many points

    currently the lock is merged with the codegen lock, since they call each other recursively

The following is the root lock, meaning no other lock shall be held when trying to acquire it:

  • toplevel

    this should be held while attempting a top-level action (such as making a new type or defining a new method): trying to obtain this lock inside a staged function will cause a deadlock condition!

    additionally, it's unclear if any code can safely run in parallel with an arbitrary toplevel expression, so it may require all threads to get to a safepoint first

Broken Locks

The following locks are broken:

  • toplevel

    doesn't exist right now

    fix: create it

Shared Global Data Structures

These data structures each need locks due to being shared mutable global state. It is the inverse list for the above lock priority list. This list does not include level 1 leaf resources due to their simplicity.

MethodTable modifications (def, cache, kwsorter type) : MethodTable->writelock

Type declarations : toplevel lock

Type application : typecache lock

Module serializer : toplevel lock

JIT & type-inference : codegen lock

MethodInstance updates : codegen lock

  • These fields are generally lazy initialized, using the test-and-test-and-set pattern.

  • These are set at construction and immutable:

    • specTypes
    • sparam_vals
    • def
  • These are set by jl_type_infer (while holding codegen lock):

    • rettype
    • inferred
    • these can also be reset, see jl_set_lambda_rettype for that logic as it needs to keep functionObjectsDecls in sync
  • inInference flag:

    • optimization to quickly avoid recurring into jl_type_infer while it is already running
    • actual state (of setting inferred, then fptr) is protected by codegen lock
  • Function pointers (jlcall_api and fptr, unspecialized_ducttape):

    • these transition once, from NULL to a value, while the codegen lock is held
  • Code-generator cache (the contents of functionObjectsDecls):

    • these can transition multiple times, but only while the codegen lock is held
    • it is valid to use old version of this, or block for new versions of this, so races are benign, as long as the code is careful not to reference other data in the method instance (such as rettype) and assume it is coordinated, unless also holding the codegen lock
  • compile_traced flag:

    • unknown

LLVMContext : codegen lock

Method : Method->writelock

  • roots array (serializer and codegen)
  • invoke / specializations / tfunc modifications
diff --git a/en/stable/devdocs/meta/index.html b/en/stable/devdocs/meta/index.html
index 3433a1972efde..b290ff2b8cd85 100644
--- a/en/stable/devdocs/meta/index.html
+++ b/en/stable/devdocs/meta/index.html
@@ -6,7 +6,7 @@
 
 ga('create', 'UA-28835595-6', 'auto');
 ga('send', 'pageview');

Talking to the compiler (the :meta mechanism)

In some circumstances, one might wish to provide hints or instructions that a given block of code has special properties: you might always want to inline it, or you might want to turn on special compiler optimization passes. Starting with version 0.4, Julia has a convention that these instructions can be placed inside a :meta expression, which is typically (but not necessarily) the first expression in the body of a function.

:meta expressions are created with macros. As an example, consider the implementation of the @inline macro:

macro inline(ex)
     esc(isa(ex, Expr) ? pushmeta!(ex, :inline) : ex)
 end

Here, ex is expected to be an expression defining a function. A statement like this:

@inline function myfunction(x)
     x*(x+3)
diff --git a/en/stable/devdocs/object/index.html b/en/stable/devdocs/object/index.html
index 49d54fb477f73..a7e4504a374bf 100644
--- a/en/stable/devdocs/object/index.html
+++ b/en/stable/devdocs/object/index.html
@@ -6,7 +6,7 @@
 
Memory layout of Julia Objects

Object layout (jl_value_t)

The jl_value_t struct is the name for a block of memory owned by the Julia Garbage Collector, representing the data associated with a Julia object in memory. Absent any type information, it is simply an opaque pointer:

typedef struct jl_value_t* jl_pvalue_t;

Each jl_value_t struct is contained in a jl_typetag_t struct that contains metadata information about the Julia object, such as its type and garbage collector (gc) reachability:

typedef struct {
    opaque metadata;
    jl_value_t value;
} jl_typetag_t;

The type of any Julia object is an instance of a leaf jl_datatype_t object. The jl_typeof() function can be used to query for it:

jl_value_t *jl_typeof(jl_value_t *v);

The layout of the object depends on its type. Reflection methods can be used to inspect that layout. A field can be accessed by calling one of the get-field methods:

jl_value_t *jl_get_nth_field_checked(jl_value_t *v, size_t i);
diff --git a/en/stable/devdocs/offset-arrays/index.html b/en/stable/devdocs/offset-arrays/index.html
index 8be5a774e4e9b..6e29aed2164d2 100644
--- a/en/stable/devdocs/offset-arrays/index.html
+++ b/en/stable/devdocs/offset-arrays/index.html
@@ -6,7 +6,7 @@
 
Arrays with custom indices

Conventionally, Julia's arrays are indexed starting at 1, whereas some other languages start numbering at 0, and yet others (e.g., Fortran) allow you to specify arbitrary starting indices. While there is much merit in picking a standard (i.e., 1 for Julia), there are some algorithms which simplify considerably if you can index outside the range 1:size(A,d) (and not just 0:size(A,d)-1, either). To facilitate such computations, Julia supports arrays with arbitrary indices.

The purpose of this page is to address the question, "what do I have to do to support such arrays in my own code?" First, let's address the simplest case: if you know that your code will never need to handle arrays with unconventional indexing, hopefully the answer is "nothing." Old code, on conventional arrays, should function essentially without alteration as long as it was using the exported interfaces of Julia. If you find it more convenient to just force your users to supply traditional arrays where indexing starts at one, you can add

@assert !Base.has_offset_axes(arrays...)

where arrays... is a list of the array objects that you wish to check for anything that violates 1-based indexing.

Generalizing existing code

As an overview, the steps are:

  • replace many uses of size with axes
  • replace 1:length(A) with eachindex(A), or in some cases LinearIndices(A)
  • replace explicit allocations like Array{Int}(size(B)) with similar(Array{Int}, axes(B))

These are described in more detail below.

Things to watch out for

Because unconventional indexing breaks many people's assumptions that all arrays start indexing with 1, there is always the chance that using such arrays will trigger errors. The most frustrating bugs would be incorrect results or segfaults (total crashes of Julia). For example, consider the following function:

function mycopy!(dest::AbstractVector, src::AbstractVector)
     length(dest) == length(src) || throw(DimensionMismatch("vectors must match"))
     # OK, now we're safe to use @inbounds, right? (not anymore!)
     for i = 1:length(src)
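
# For contrast, a minimal index-agnostic sketch following the steps above
# (purely illustrative; the name mycopy_generalized! is made up here):
function mycopy_generalized!(dest::AbstractVector, src::AbstractVector)
    axes(dest) == axes(src) || throw(DimensionMismatch("vectors must match"))
    for i in eachindex(dest, src)   # valid for any indexing convention
        @inbounds dest[i] = src[i]
    end
    return dest
end
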
diff --git a/en/stable/devdocs/reflection/index.html b/en/stable/devdocs/reflection/index.html
index 55e9066ca6f15..0a629062f0dad 100644
--- a/en/stable/devdocs/reflection/index.html
+++ b/en/stable/devdocs/reflection/index.html
@@ -6,7 +6,7 @@
 
Reflection and introspection

Julia provides a variety of runtime reflection capabilities.

Module bindings

The exported names for a Module are available using names(m::Module), which will return an array of Symbol elements representing the exported bindings. names(m::Module, all = true) returns symbols for all bindings in m, regardless of export status.

DataType fields

The names of DataType fields may be interrogated using fieldnames. For example, given the following type, fieldnames(Point) returns a tuple of Symbols representing the field names:

julia> struct Point
            x::Int
            y
        end
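
Continuing this example, the field names and declared field types can be queried like so (output as printed by a Julia 1.0 REPL on a 64-bit machine; shown for illustration):

julia> fieldnames(Point)
(:x, :y)

julia> Point.types
svec(Int64, Any)
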
diff --git a/en/stable/devdocs/require/index.html b/en/stable/devdocs/require/index.html
index a5f391f8ec7a9..2b39a34e05d7b 100644
--- a/en/stable/devdocs/require/index.html
+++ b/en/stable/devdocs/require/index.html
@@ -6,7 +6,7 @@
 
Module loading

Base.require is responsible for loading modules and it also manages the precompilation cache. It is the implementation of the import statement.

Experimental features

The features below are experimental and not part of the stable Julia API. Before building upon them, inform yourself about the current thinking and whether they might change soon.

Module loading callbacks

It is possible to listen to the modules loaded by Base.require, by registering a callback.

loaded_packages = Channel{Symbol}()
 callback = (mod::Symbol) -> put!(loaded_packages, mod)
 push!(Base.package_callbacks, callback)
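
Note that the Channel above is unbuffered, so the put! inside the callback blocks until something takes from it; a small consumer task, for example, keeps package loading from stalling (an illustrative pattern, not part of the original snippet):

@async while true
    mod = take!(loaded_packages)
    @info "package loaded" mod
end
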

Please note that the symbol given to the callback is a non-unique identifier and it is the responsibility of the callback provider to walk the module chain to determine the fully qualified name of the loaded binding.

The callback below is an example of how to do that:

# Get the fully-qualified name of a module.
 function module_fqn(name::Symbol)
diff --git a/en/stable/devdocs/sanitizers/index.html b/en/stable/devdocs/sanitizers/index.html
index 5218db6e1e7d7..65061fd05147f 100644
--- a/en/stable/devdocs/sanitizers/index.html
+++ b/en/stable/devdocs/sanitizers/index.html
@@ -6,4 +6,4 @@
 
Sanitizer support

General considerations

Using Clang's sanitizers obviously requires you to use Clang (USECLANG=1), but there's another catch: most sanitizers require a run-time library, provided by the host compiler, while the instrumented code generated by Julia's JIT relies on functionality from that library. This implies that the LLVM version of your host compiler must match that of the LLVM library used within Julia.

An easy solution is to have a dedicated build folder that provides a matching toolchain, by building with BUILD_LLVM_CLANG=1. You can then refer to this toolchain from another build folder by specifying USECLANG=1 while overriding the CC and CXX variables.

Address Sanitizer (ASAN)

For detecting or debugging memory bugs, you can use Clang's address sanitizer (ASAN). By compiling with SANITIZE=1 you enable ASAN for the Julia compiler and its generated code. In addition, you can specify LLVM_SANITIZE=1 to sanitize the LLVM library as well. Note that these options incur a high performance and memory cost. For example, using ASAN for Julia and LLVM makes testall1 take 8-10 times as long while using 20 times as much memory (this can be reduced to a factor of 3 and 4, respectively, by using the options described below).
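
Putting the options above together, the Make.user of a sanitized build folder might look roughly like this (the toolchain path is hypothetical and should point at the clang binaries produced by a BUILD_LLVM_CLANG=1 build):

USECLANG=1
CC=/path/to/toolchain/usr/tools/clang
CXX=/path/to/toolchain/usr/tools/clang++
SANITIZE=1
LLVM_SANITIZE=1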

By default, Julia sets the allow_user_segv_handler=1 ASAN flag, which is required for signal delivery to work properly. You can define other options using the ASAN_OPTIONS environment variable, in which case you'll need to repeat the default option mentioned before. For example, memory usage can be reduced by specifying fast_unwind_on_malloc=0 and malloc_context_size=2, at the cost of backtrace accuracy. For now, Julia also sets detect_leaks=0, but this should be removed in the future.

Memory Sanitizer (MSAN)

For detecting use of uninitialized memory, you can use Clang's memory sanitizer (MSAN) by compiling with SANITIZE_MEMORY=1.

diff --git a/en/stable/devdocs/stdio/index.html b/en/stable/devdocs/stdio/index.html
index 31b625835e08b..dcab197484d36 100644
--- a/en/stable/devdocs/stdio/index.html
+++ b/en/stable/devdocs/stdio/index.html
@@ -6,7 +6,7 @@

printf() and stdio in the Julia runtime

Libuv wrappers for stdio

julia.h defines libuv wrappers for the stdio.h streams:

uv_stream_t *JL_STDIN;
 uv_stream_t *JL_STDOUT;
 uv_stream_t *JL_STDERR;

... and corresponding output functions:

int jl_printf(uv_stream_t *s, const char *format, ...);
 int jl_vprintf(uv_stream_t *s, const char *format, va_list args);

These printf functions are used by the .c files in the src/ and ui/ directories wherever stdio is needed to ensure that output buffering is handled in a unified way.

In special cases, like signal handlers, where the full libuv infrastructure is too heavy, jl_safe_printf() can be used to write(2) directly to STDERR_FILENO:

void jl_safe_printf(const char *str, ...);

Interface between JL_STD* and Julia code

Base.stdin, Base.stdout and Base.stderr are bound to the JL_STD* libuv streams defined in the runtime.

Julia's __init__() function (in base/sysimg.jl) calls reinit_stdio() (in base/stream.jl) to create Julia objects for Base.stdin, Base.stdout and Base.stderr.

reinit_stdio() uses ccall to retrieve pointers to JL_STD* and calls jl_uv_handle_type() to inspect the type of each stream. It then creates a Julia Base.IOStream, Base.TTY or Base.PipeEndpoint object to represent each stream, e.g.:

$ julia -e 'println(typeof((stdin, stdout, stderr)))'
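
When all three streams are attached to a terminal, this typically prints Tuple{Base.TTY,Base.TTY,Base.TTY}; when a stream is redirected to a file or a pipe, a Base.IOStream or Base.PipeEndpoint appears in its place (typical output, shown here for illustration).
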
diff --git a/en/stable/devdocs/subarrays/index.html b/en/stable/devdocs/subarrays/index.html
index dd3681860d21f..3f9c02ae6d3d5 100644
--- a/en/stable/devdocs/subarrays/index.html
+++ b/en/stable/devdocs/subarrays/index.html
@@ -6,7 +6,7 @@
 
SubArrays

Julia's SubArray type is a container encoding a "view" of a parent AbstractArray. This page documents some of the design principles and implementation of SubArrays.

Indexing: cartesian vs. linear indexing

Broadly speaking, there are two main ways to access data in an array. The first, often called cartesian indexing, uses N indices for an N-dimensional AbstractArray. For example, a matrix A (2-dimensional) can be indexed in cartesian style as A[i,j]. The second indexing method, referred to as linear indexing, uses a single index even for higher-dimensional objects. For example, if A = reshape(1:12, 3, 4), then the expression A[5] returns the value 5. Julia allows you to combine these styles of indexing: for example, a 3d array A3 can be indexed as A3[i,j], in which case i is interpreted as a cartesian index for the first dimension, and j is a linear index over dimensions 2 and 3.

For Arrays, linear indexing appeals to the underlying storage format: an array is laid out as a contiguous block of memory, and hence the linear index is just the offset (+1) of the corresponding entry relative to the beginning of the array. However, this is not true for many other AbstractArray types: examples include SparseMatrixCSC from the SparseArrays standard library module, arrays that require some kind of computation (such as interpolation), and the type under discussion here, SubArray. For these types, the underlying information is more naturally described in terms of cartesian indices.

The getindex and setindex! functions for AbstractArray types may include automatic conversion between indexing types. For explicit conversion, CartesianIndices can be used.
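
For instance, the two styles can be converted into one another like this (a small illustrative example for a 3×4 shape):

julia> CartesianIndices((3, 4))[5]     # linear -> cartesian
CartesianIndex(2, 2)

julia> LinearIndices((3, 4))[2, 2]     # cartesian -> linear
5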

While converting from a cartesian index to a linear index is fast (it's just multiplication and addition), converting from a linear index to a cartesian index is very slow: it relies on the div operation, which is one of the slowest low-level operations you can perform with a CPU. For this reason, any code that deals with AbstractArray types is best designed in terms of cartesian, rather than linear, indexing.

Index replacement

Consider making 2d slices of a 3d array:

julia> A = rand(2,3,4);
 
 julia> S1 = view(A, :, 1, 2:3)
 2×2 view(::Array{Float64,3}, :, 1, 2:3) with eltype Float64:
diff --git a/en/stable/devdocs/sysimg/index.html b/en/stable/devdocs/sysimg/index.html
index 8c6c8daed8835..6fb102cb1b343 100644
--- a/en/stable/devdocs/sysimg/index.html
+++ b/en/stable/devdocs/sysimg/index.html
@@ -6,4 +6,4 @@
 
System Image Building

Building the Julia system image

Julia ships with a preparsed system image containing the contents of the Base module, named sys.ji. This file is also precompiled into a shared library called sys.{so,dll,dylib} on as many platforms as possible, so as to give vastly improved startup times. On systems that do not ship with a precompiled system image file, one can be generated from the source files shipped in Julia's DATAROOTDIR/julia/base folder.

This operation is useful for multiple reasons. A user may:

  • Build a precompiled shared library system image on a platform that did not ship with one, thereby improving startup times.
  • Modify Base, rebuild the system image and use the new Base next time Julia is started.
  • Include a userimg.jl file that includes packages into the system image, thereby creating a system image that has packages embedded into the startup environment.

Julia now ships with a script that automates the task of building the system image, fittingly named build_sysimg.jl, which lives in DATAROOTDIR/julia/. To include it in the current Julia session, type:

include(joinpath(Sys.BINDIR, Base.DATAROOTDIR, "julia", "build_sysimg.jl"))

This will include a build_sysimg function:

build_sysimg(sysimg_path=default_sysimg_path(), cpu_target="native", userimg_path=nothing; force=false)

Rebuild the system image. Store it in sysimg_path, which defaults to a file named sys.ji that sits in the same folder as libjulia.{so,dylib}, except on Windows where it defaults to Sys.BINDIR/../lib/julia/sys.ji. Use the CPU instruction set given by cpu_target. Valid CPU targets are the same as for the -C option to julia, or the -march option to gcc. Defaults to native, which means to use all CPU instructions available on the current processor. Include the user image file given by userimg_path, which should contain directives such as using MyPackage to include that package in the new system image. The new system image will not replace an older image unless force is set to true.

source

Note that this file can also be run as a script itself, with command line arguments taking the place of arguments passed to the build_sysimg function. For example, to build a system image in /tmp/sys.{so,dll,dylib}, with the core2 CPU instruction set, a user image of ~/userimg.jl and force set to true, one would execute:

julia build_sysimg.jl /tmp/sys core2 ~/userimg.jl --force

System image optimized for multiple microarchitectures

The system image can be compiled simultaneously for multiple CPU microarchitectures under the same instruction set architecture (ISA). Multiple versions of the same function may be created, with minimal dispatch points inserted into shared functions, in order to take advantage of different ISA extensions or other microarchitecture features. The version that offers the best performance will be selected automatically at runtime based on available features.

Specifying multiple system image targets

A multi-microarchitecture system image can be enabled by passing multiple targets during system image compilation. This can be done either with the JULIA_CPU_TARGET make option or with the -C command line option when running the compilation command manually. Multiple targets are separated by ; in the option. The syntax for each target is a CPU name followed by multiple features separated by ,. All features supported by LLVM are supported, and a feature can be disabled with a - prefix (a + prefix is also allowed and ignored, to be consistent with LLVM syntax). Additionally, a few special features are supported to control the function cloning behavior; an example combining these options follows the list below.

  1. clone_all

    By default, only functions that are the most likely to benefit from the microarchitecture features will be cloned. When clone_all is specified for a target, however, all functions in the system image will be cloned for the target. The negative form -clone_all can be used to prevent the built-in heuristic from cloning all functions.

  2. base(<n>)

    Where <n> is a placeholder for a non-negative number (e.g. base(0), base(1)). By default, a partially cloned (i.e. not clone_all) target will use functions from the default target (first one specified) if a function is not cloned. This behavior can be changed by specifying a different base with the base(<n>) option. The nth target (0-based) will be used as the base target instead of the default (0th) one. The base target has to be either 0 or another clone_all target. Specifying a non-default target that is not a clone_all target as the base target will cause an error.

  3. opt_size

    This causes the functions for the target to be optimized for size when there isn't a significant runtime performance impact. This corresponds to the -Os option of GCC and Clang.

  4. min_size

    This causes the functions for the target to be optimized for size even when doing so might have a significant runtime performance impact. This corresponds to the -Oz option of Clang.
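
Putting the syntax above together, a multi-target system image might be requested roughly as follows (the CPU names and features are purely illustrative; the third target falls back to the fully cloned second target via base(1)):

make JULIA_CPU_TARGET="generic;sandybridge,-xsaveopt,clone_all;haswell,-rdrnd,base(1)"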

Implementation overview

This is a brief overview of the different parts involved in the implementation. See the code comments in each component for more implementation details.

  1. System image compilation

    The parsing and cloning decisions are made in src/processor*. We currently support cloning of functions based on the presence of loops, SIMD instructions, or other math operations (e.g. fastmath, fma, muladd). This information is passed on to src/llvm-multiversioning.cpp, which does the actual cloning. In addition to doing the cloning and inserting dispatch slots (see comments in MultiVersioning::runOnModule for how this is done), the pass also generates metadata so that the runtime can load and initialize the system image correctly. A detailed description of the metadata is available in src/processor.h.

  2. System image loading

    The loading and initialization of the system image is done in src/processor* by parsing the metadata saved during system image generation. Host feature detection and selection decisions are made in src/processor_*.cpp, depending on the ISA. The target selection prefers an exact CPU name match, a larger vector register size, and a larger number of features. An overview of this process is in src/processor.cpp.

diff --git a/en/stable/devdocs/types/index.html b/en/stable/devdocs/types/index.html
index dec5cdb9bbcab..85fdd687ad383 100644
--- a/en/stable/devdocs/types/index.html
+++ b/en/stable/devdocs/types/index.html
@@ -6,7 +6,7 @@

More about types

If you've used Julia for a while, you understand the fundamental role that types play. Here we try to get under the hood, focusing particularly on Parametric Types.

Types and sets (and Any and Union{}/Bottom)

It's perhaps easiest to conceive of Julia's type system in terms of sets. While programs manipulate individual values, a type refers to a set of values. This is not the same thing as a collection; for example a Set of values is itself a single Set value. Rather, a type describes a set of possible values, expressing uncertainty about which value we have.

A concrete type T describes the set of values whose direct tag, as returned by the typeof function, is T. An abstract type describes some possibly-larger set of values.

Any describes the entire universe of possible values. Integer is a subset of Any that includes Int, Int8, and other concrete types. Internally, Julia also makes heavy use of another type known as Bottom, which can also be written as Union{}. This corresponds to the empty set.

Julia's types support the standard operations of set theory: you can ask whether T1 is a "subset" (subtype) of T2 with T1 <: T2. Likewise, you intersect two types using typeintersect, take their union with Union, and compute a type that contains their union with typejoin:

julia> typeintersect(Int, Float64)
 Union{}
 
 julia> Union{Int, Float64}
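
# typejoin, mentioned in the text above, computes a type containing the union of
# its arguments; an illustrative example (not part of the original transcript):

julia> typejoin(Int, Float64)
Real
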
diff --git a/en/stable/devdocs/valgrind/index.html b/en/stable/devdocs/valgrind/index.html
index ba1f9d5b2c033..618795ea92371 100644
--- a/en/stable/devdocs/valgrind/index.html
+++ b/en/stable/devdocs/valgrind/index.html
@@ -6,4 +6,4 @@
 
Using Valgrind with Julia

Valgrind is a tool for memory debugging, memory leak detection, and profiling. This section describes things to keep in mind when using Valgrind to debug memory issues with Julia.

General considerations

By default, Valgrind assumes that there is no self-modifying code in the programs it runs. This assumption works fine in most instances but fails miserably for a just-in-time compiler like julia. For this reason it is crucial to pass --smc-check=all-non-file to valgrind, else code may crash or behave unexpectedly (often in subtle ways).

In some cases, to better detect memory errors using Valgrind it can help to compile julia with memory pools disabled. The compile-time flag MEMDEBUG disables memory pools in Julia, and MEMDEBUG2 disables memory pools in FemtoLisp. To build julia with both flags, add the following line to Make.user:

CFLAGS = -DMEMDEBUG -DMEMDEBUG2

Another thing to note: if your program uses multiple worker processes, it is likely that you want all such worker processes to run under Valgrind, not just the parent process. To do this, pass --trace-children=yes to valgrind.

Suppressions

Valgrind will typically display spurious warnings as it runs. To reduce the number of such warnings, it helps to provide a suppressions file to Valgrind. A sample suppressions file is included in the Julia source distribution at contrib/valgrind-julia.supp.

The suppressions file can be used from the julia/ source directory as follows:

$ valgrind --smc-check=all-non-file --suppressions=contrib/valgrind-julia.supp ./julia progname.jl

Any memory errors that are displayed should either be reported as bugs or contributed as additional suppressions. Note that some versions of Valgrind are shipped with insufficient default suppressions, so that may be one thing to consider before submitting any bugs.

Running the Julia test suite under Valgrind

It is possible to run the entire Julia test suite under Valgrind, but it does take quite some time (typically several hours). To do so, run the following command from the julia/test/ directory:

valgrind --smc-check=all-non-file --trace-children=yes --suppressions=$PWD/../contrib/valgrind-julia.supp ../julia runtests.jl all

If you would like to see a report of "definite" memory leaks, pass the flags --leak-check=full --show-leak-kinds=definite to valgrind as well.

Caveats

Valgrind currently does not support multiple rounding modes, so code that adjusts the rounding mode will behave differently when run under Valgrind.

In general, if after setting --smc-check=all-non-file you find that your program behaves differently when run under Valgrind, it may help to pass --tool=none to valgrind as you investigate further. This will enable the minimal Valgrind machinery but will also run much faster than when the full memory checker is enabled.

diff --git a/en/stable/index.html b/en/stable/index.html
index 89061f1a39e66..178f2f67592ae 100644
--- a/en/stable/index.html
+++ b/en/stable/index.html
@@ -6,4 +6,4 @@

Julia 1.0 Documentation

Welcome to the documentation for Julia 1.0.

Please read the release blog post for a general overview of the language and many of the changes since Julia v0.6. Note that version 0.7 was released alongside 1.0 to provide an upgrade path for packages and code that predates the 1.0 release. The only difference between 0.7 and 1.0 is the removal of deprecation warnings. For a complete list of all the changes since 0.6, see the release notes for version 0.7.

Introduction

Scientific computing has traditionally required the highest performance, yet domain experts have largely moved to slower dynamic languages for daily work. We believe there are many good reasons to prefer dynamic languages for these applications, and we do not expect their use to diminish. Fortunately, modern language design and compiler techniques make it possible to mostly eliminate the performance trade-off and provide a single environment productive enough for prototyping and efficient enough for deploying performance-intensive applications. The Julia programming language fills this role: it is a flexible dynamic language, appropriate for scientific and numerical computing, with performance comparable to traditional statically-typed languages.

Because Julia's compiler is different from the interpreters used for languages like Python or R, you may find that Julia's performance is unintuitive at first. If you find that something is slow, we highly recommend reading through the Performance Tips section before trying anything else. Once you understand how Julia works, it's easy to write code that's nearly as fast as C.

Julia features optional typing, multiple dispatch, and good performance, achieved using type inference and just-in-time (JIT) compilation, implemented using LLVM. It is multi-paradigm, combining features of imperative, functional, and object-oriented programming. Julia provides ease and expressiveness for high-level numerical computing, in the same way as languages such as R, MATLAB, and Python, but also supports general programming. To achieve this, Julia builds upon the lineage of mathematical programming languages, but also borrows much from popular dynamic languages, including Lisp, Perl, Python, Lua, and Ruby.

The most significant departures of Julia from typical dynamic languages are:

  • The core language imposes very little; Julia Base and the standard library are written in Julia itself, including primitive operations like integer arithmetic
  • A rich language of types for constructing and describing objects, that can also optionally be used to make type declarations
  • The ability to define function behavior across many combinations of argument types via multiple dispatch
  • Automatic generation of efficient, specialized code for different argument types
  • Good performance, approaching that of statically-compiled languages like C

Although one sometimes speaks of dynamic languages as being "typeless", they are definitely not: every object, whether primitive or user-defined, has a type. The lack of type declarations in most dynamic languages, however, means that one cannot instruct the compiler about the types of values, and often cannot explicitly talk about types at all. In static languages, on the other hand, while one can – and usually must – annotate types for the compiler, types exist only at compile time and cannot be manipulated or expressed at run time. In Julia, types are themselves run-time objects, and can also be used to convey information to the compiler.

While the casual programmer need not explicitly use types or multiple dispatch, they are the core unifying features of Julia: functions are defined on different combinations of argument types, and applied by dispatching to the most specific matching definition. This model is a good fit for mathematical programming, where it is unnatural for the first argument to "own" an operation as in traditional object-oriented dispatch. Operators are just functions with special notation – to extend addition to new user-defined data types, you define new methods for the + function. Existing code then seamlessly applies to the new data types.
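
For example, a user-defined type participates in this mechanism simply by adding methods for existing functions (the Point2D type below is made up purely for illustration):

struct Point2D
    x::Float64
    y::Float64
end

Base.:+(a::Point2D, b::Point2D) = Point2D(a.x + b.x, a.y + b.y)

Point2D(1.0, 2.0) + Point2D(3.0, 4.0)   # Point2D(4.0, 6.0)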

Partly because of run-time type inference (augmented by optional type annotations), and partly because of a strong focus on performance from the inception of the project, Julia's computational efficiency exceeds that of other dynamic languages, and even rivals that of statically-compiled languages. For large scale numerical problems, speed always has been, continues to be, and probably always will be crucial: the amount of data being processed has easily kept pace with Moore's Law over the past decades.

Julia aims to create an unprecedented combination of ease-of-use, power, and efficiency in a single language. In addition to the above, some advantages of Julia over comparable systems include:

  • Free and open source (MIT licensed)
  • User-defined types are as fast and compact as built-ins
  • No need to vectorize code for performance; devectorized code is fast
  • Designed for parallelism and distributed computation
  • Lightweight "green" threading (coroutines)
  • Unobtrusive yet powerful type system
  • Elegant and extensible conversions and promotions for numeric and other types
  • Efficient support for Unicode, including but not limited to UTF-8
  • Call C functions directly (no wrappers or special APIs needed)
  • Powerful shell-like capabilities for managing other processes
  • Lisp-like macros and other metaprogramming facilities
diff --git a/en/stable/manual/arrays/index.html b/en/stable/manual/arrays/index.html
index d15daa63bcb8d..9b1df56a3b848 100644
--- a/en/stable/manual/arrays/index.html
+++ b/en/stable/manual/arrays/index.html
@@ -6,7 +6,7 @@

Multi-dimensional Arrays

Julia, like most technical computing languages, provides a first-class array implementation. Most technical computing languages pay a lot of attention to their array implementation at the expense of other containers. Julia does not treat arrays in any special way. The array library is implemented almost completely in Julia itself, and derives its performance from the compiler, just like any other code written in Julia. As such, it's also possible to define custom array types by inheriting from AbstractArray. See the manual section on the AbstractArray interface for more details on implementing a custom array type.

An array is a collection of objects stored in a multi-dimensional grid. In the most general case, an array may contain objects of type Any. For most computational purposes, arrays should contain objects of a more specific type, such as Float64 or Int32.

In general, unlike many other technical computing languages, Julia does not expect programs to be written in a vectorized style for performance. Julia's compiler uses type inference and generates optimized code for scalar array indexing, allowing programs to be written in a style that is convenient and readable, without sacrificing performance, and using less memory at times.

In Julia, all arguments to functions are passed by sharing (i.e. by pointers). Some technical computing languages pass arrays by value, and while this prevents accidental modification by callees of a value in the caller, it makes avoiding unwanted copying of arrays difficult. By convention, a function name ending with a ! indicates that it will mutate or destroy the value of one or more of its arguments (see, for example, sort and sort!). Callees must make explicit copies to ensure that they don't modify inputs that they don't intend to change. Many non-mutating functions are implemented by calling a function of the same name with an added ! at the end on an explicit copy of the input, and returning that copy.

Basic Functions

Function          Description
eltype(A)         the type of the elements contained in A
length(A)         the number of elements in A
ndims(A)          the number of dimensions of A
size(A)           a tuple containing the dimensions of A
size(A,n)         the size of A along dimension n
axes(A)           a tuple containing the valid indices of A
axes(A,n)         a range expressing the valid indices along dimension n
eachindex(A)      an efficient iterator for visiting each position in A
stride(A,k)       the stride (linear index distance between adjacent elements) along dimension k
strides(A)        a tuple of the strides in each dimension

Construction and Initialization

Many functions for constructing and initializing arrays are provided. In the following list of such functions, calls with a dims... argument can either take a single tuple of dimension sizes or a series of dimension sizes passed as a variable number of arguments. Most of these functions also accept a first input T, which is the element type of the array. If the type T is omitted it will default to Float64.

FunctionDescription
Array{T}(undef, dims...)an uninitialized dense Array
zeros(T, dims...)an Array of all zeros
ones(T, dims...)an Array of all ones
trues(dims...)a BitArray with all values true
falses(dims...)a BitArray with all values false
reshape(A, dims...)an array containing the same data as A, but with different dimensions
copy(A)copy A
deepcopy(A)copy A, recursively copying its elements
similar(A, T, dims...)an uninitialized array of the same type as A (dense, sparse, etc.), but with the specified element type and dimensions. The second and third arguments are both optional, defaulting to the element type and dimensions of A if omitted.
reinterpret(T, A)an array with the same binary data as A, but with element type T
rand(T, dims...)an Array with random, iid [1] and uniformly distributed values in the half-open interval $[0, 1)$
randn(T, dims...)an Array with random, iid and standard normally distributed values
Matrix{T}(I, m, n)m-by-n identity matrix
range(start, stop=stop, length=n)range of n linearly spaced elements from start to stop
fill!(A, x)fill the array A with the value x
fill(x, dims...)an Array filled with the value x
[1]

iid, independently and identically distributed.

The syntax [A, B, C, ...] constructs a 1-d array (vector) of its arguments. If all arguments have a common promotion type then they get converted to that type using convert.

To see the various ways we can pass dimensions to these constructors, consider the following examples:

julia> zeros(Int8, 2, 2)
+

Multi-dimensional Arrays

Multi-dimensional Arrays

Julia, like most technical computing languages, provides a first-class array implementation. Most technical computing languages pay a lot of attention to their array implementation at the expense of other containers. Julia does not treat arrays in any special way. The array library is implemented almost completely in Julia itself, and derives its performance from the compiler, just like any other code written in Julia. As such, it's also possible to define custom array types by inheriting from AbstractArray. See the manual section on the AbstractArray interface for more details on implementing a custom array type.

An array is a collection of objects stored in a multi-dimensional grid. In the most general case, an array may contain objects of type Any. For most computational purposes, arrays should contain objects of a more specific type, such as Float64 or Int32.

In general, unlike many other technical computing languages, Julia does not expect programs to be written in a vectorized style for performance. Julia's compiler uses type inference and generates optimized code for scalar array indexing, allowing programs to be written in a style that is convenient and readable, without sacrificing performance, and using less memory at times.

In Julia, all arguments to functions are passed by sharing (i.e. by pointers). Some technical computing languages pass arrays by value, and while this prevents accidental modification by callees of a value in the caller, it makes avoiding unwanted copying of arrays difficult. By convention, a function name ending with a ! indicates that it will mutate or destroy the value of one or more of its arguments (see, for example, sort and sort!). Callees must make explicit copies to ensure that they don't modify inputs that they don't intend to change. Many non-mutating functions are implemented by calling a function of the same name with an added ! at the end on an explicit copy of the input, and returning that copy.
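
As a brief illustrative sketch (not part of the original page), the convention can be seen with sort and sort!: sort returns a sorted copy and leaves its argument untouched, while sort! sorts the argument in place:

julia> v = [3, 1, 2];

julia> sort(v)   # returns a new sorted array; v is unchanged
3-element Array{Int64,1}:
 1
 2
 3

julia> sort!(v)  # sorts v in place and returns it
3-element Array{Int64,1}:
 1
 2
 3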

Basic Functions

Function         Description
eltype(A)        the type of the elements contained in A
length(A)        the number of elements in A
ndims(A)         the number of dimensions of A
size(A)          a tuple containing the dimensions of A
size(A,n)        the size of A along dimension n
axes(A)          a tuple containing the valid indices of A
axes(A,n)        a range expressing the valid indices along dimension n
eachindex(A)     an efficient iterator for visiting each position in A
stride(A,k)      the stride (linear index distance between adjacent elements) along dimension k
strides(A)       a tuple of the strides in each dimension
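
For example (an illustrative sketch, not taken from the original page), several of these functions applied to a small random matrix:

julia> A = rand(4, 3);

julia> eltype(A), length(A), ndims(A)
(Float64, 12, 2)

julia> size(A), size(A, 2)
((4, 3), 3)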

Construction and Initialization

Many functions for constructing and initializing arrays are provided. In the following list of such functions, calls with a dims... argument can either take a single tuple of dimension sizes or a series of dimension sizes passed as a variable number of arguments. Most of these functions also accept a first input T, which is the element type of the array. If the type T is omitted it will default to Float64.

Function                            Description
Array{T}(undef, dims...)            an uninitialized dense Array
zeros(T, dims...)                   an Array of all zeros
ones(T, dims...)                    an Array of all ones
trues(dims...)                      a BitArray with all values true
falses(dims...)                     a BitArray with all values false
reshape(A, dims...)                 an array containing the same data as A, but with different dimensions
copy(A)                             copy A
deepcopy(A)                         copy A, recursively copying its elements
similar(A, T, dims...)              an uninitialized array of the same type as A (dense, sparse, etc.), but with the specified element type and dimensions; the second and third arguments are both optional, defaulting to the element type and dimensions of A if omitted
reinterpret(T, A)                   an array with the same binary data as A, but with element type T
rand(T, dims...)                    an Array with random, iid [1] and uniformly distributed values in the half-open interval [0, 1)
randn(T, dims...)                   an Array with random, iid and standard normally distributed values
Matrix{T}(I, m, n)                  m-by-n identity matrix
range(start, stop=stop, length=n)   range of n linearly spaced elements from start to stop
fill!(A, x)                         fill the array A with the value x
fill(x, dims...)                    an Array filled with the value x
[1] iid, independently and identically distributed.

The syntax [A, B, C, ...] constructs a 1-d array (vector) of its arguments. If all arguments have a common promotion type then they get converted to that type using convert.

To see the various ways we can pass dimensions to these constructors, consider the following examples:

julia> zeros(Int8, 2, 2)
 2×2 Array{Int8,2}:
  0  0
  0  0
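
As a further sketch (not from the original page), the same dimensions can equally be passed as a single tuple, and fill works the same way:

julia> zeros(Int8, (2, 2))
2×2 Array{Int8,2}:
 0  0
 0  0

julia> fill(7, (2, 3))
2×3 Array{Int64,2}:
 7  7  7
 7  7  7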
diff --git a/en/stable/manual/calling-c-and-fortran-code/index.html b/en/stable/manual/calling-c-and-fortran-code/index.html
index 87556806af45c..8cc6257cedf65 100644
--- a/en/stable/manual/calling-c-and-fortran-code/index.html
+++ b/en/stable/manual/calling-c-and-fortran-code/index.html
@@ -6,7 +6,7 @@
 
 ga('create', 'UA-28835595-6', 'auto');
 ga('send', 'pageview');
-

+

Calling C and Fortran Code

Calling C and Fortran Code

Though most code can be written in Julia, there are many high-quality, mature libraries for numerical computing already written in C and Fortran. To allow easy use of this existing code, Julia makes it simple and efficient to call C and Fortran functions. Julia has a "no boilerplate" philosophy: functions can be called directly from Julia without any "glue" code, code generation, or compilation – even from the interactive prompt. This is accomplished just by making an appropriate call with ccall syntax, which looks like an ordinary function call.

The code to be called must be available as a shared library. Most C and Fortran libraries ship compiled as shared libraries already, but if you are compiling the code yourself using GCC (or Clang), you will need to use the -shared and -fPIC options. The machine instructions generated by Julia's JIT are the same as a native C call would be, so the resulting overhead is the same as calling a library function from C code. (Non-library function calls in both C and Julia can be inlined and thus may have even less overhead than calls to shared library functions. When both libraries and executables are generated by LLVM, it is possible to perform whole-program optimizations that can even optimize across this boundary, but Julia does not yet support that. In the future, however, it may do so, yielding even greater performance gains.)

Shared libraries and functions are referenced by a tuple of the form (:function, "library") or ("function", "library") where function is the C-exported function name. library refers to the shared library name: shared libraries available in the (platform-specific) load path will be resolved by name, and if necessary a direct path may be specified.

A function name may be used alone in place of the tuple (just :function or "function"). In this case the name is resolved within the current process. This form can be used to call C library functions, functions in the Julia runtime, or functions in an application linked to Julia.
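
For instance (an illustrative sketch assuming a Unix-like system where the SHELL environment variable is set; the printed value is only an example), a C library function can be called by name alone and its result converted to a Julia string:

julia> path = ccall(:getenv, Cstring, (Cstring,), "SHELL");

julia> unsafe_string(path)
"/bin/bash"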

By default, Fortran compilers generate mangled names (for example, converting function names to lowercase or uppercase, often appending an underscore), and so to call a Fortran function via ccall you must pass the mangled identifier corresponding to the rule followed by your Fortran compiler. Also, when calling a Fortran function, all inputs must be passed as pointers to allocated values on the heap or stack. This applies not only to arrays and other mutable objects which are normally heap-allocated, but also to scalar values such as integers and floats which are normally stack-allocated and commonly passed in registers when using C or Julia calling conventions.
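
As a hypothetical sketch (the library name libfoo and the subroutine square_ are made up for illustration), a Fortran subroutine taking a single double-precision argument would receive even that scalar by reference, for example via a Ref:

x = Ref{Cdouble}(2.0)                          # wrap the scalar so it is passed by reference
ccall((:square_, "libfoo"), Cvoid, (Ref{Cdouble},), x)
x[]                                            # read back the value the Fortran code may have written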

Finally, you can use ccall to actually generate a call to the library function. Arguments to ccall are as follows:

  1. A (:function, "library") pair, which must be written as a literal constant,

    OR

    a function pointer (for example, from dlsym).

  2. Return type (see below for mapping the declared C type to Julia)

    • This argument will be evaluated at compile-time, when the containing method is defined.
  3. A tuple of input types. The input types must be written as a literal tuple, not a tuple-valued variable or expression.

    • This argument will be evaluated at compile-time, when the containing method is defined.
  4. The following arguments, if any, are the actual argument values passed to the function.

As a complete but simple example, the following calls the clock function from the standard C library:

julia> t = ccall((:clock, "libc"), Int32, ())
 2292761
 
 julia> t
diff --git a/en/stable/manual/code-loading/index.html b/en/stable/manual/code-loading/index.html
index a95733778e877..999279114a45f 100644
--- a/en/stable/manual/code-loading/index.html
+++ b/en/stable/manual/code-loading/index.html
@@ -6,7 +6,7 @@
 
 ga('create', 'UA-28835595-6', 'auto');
 ga('send', 'pageview');
-

+

Code Loading

Code Loading

Julia has two mechanisms for loading code:

  1. Code inclusion: e.g. include("source.jl"). Inclusion allows you to split a single program across multiple source files. The expression include("source.jl") causes the contents of the file source.jl to be evaluated in the global scope of the module where the include call occurs. If include("source.jl") is called multiple times, source.jl is evaluated multiple times. The included path, source.jl, is interpreted relative to the file where the include call occurs. This makes it simple to relocate a subtree of source files. In the REPL, included paths are interpreted relative to the current working directory, pwd().
  2. Package loading: e.g. import X or using X. The import mechanism allows you to load a package—i.e. an independent, reusable collection of Julia code, wrapped in a module—and makes the resulting module available by the name X inside of the importing module. If the same X package is imported multiple times in the same Julia session, it is only loaded the first time—on subsequent imports, the importing module gets a reference to the same module. It should be noted, however, that import X can load different packages in different contexts: X can refer to one package named X in the main project but potentially different packages named X in each dependency. More on this below.

Code inclusion is quite straightforward: it simply parses and evaluates a source file in the context of the caller. Package loading is built on top of code inclusion and is quite a bit more complex. The rest of this chapter, therefore, focuses on the behavior and mechanics of package loading.

Note

You only need to read this chapter if you want to understand the technical details of package loading in Julia. If you just want to install and use packages, simply use Julia's built-in package manager to add packages to your environment and write import X or using X in your code to load packages that you've added.

A package is a source tree with a standard layout providing functionality that can be reused by other Julia projects. A package is loaded by import X or using X statements. These statements also make the module named X, which results from loading the package code, available within the module where the import statement occurs. The meaning of X in import X is context-dependent: which X package is loaded depends on what code the statement occurs in. The effect of import X depends on two questions:

  1. What package is X in this context?
  2. Where can that X package be found?

Understanding how Julia answers these questions is key to understanding package loading.

Federation of packages

Julia supports federated management of packages. This means that multiple independent parties can maintain both public and private packages and registries of them, and that projects can depend on a mix of public and private packages from different registries. Packages from various registries are installed and managed using a common set of tools and workflows. The Pkg package manager ships with Julia 0.7/1.0 and lets you install and manage dependencies of your projects, by creating and manipulating project files, which describe what your project depends on, and manifest files that snapshot exact versions of your project's complete dependency graph.

One consequence of federation is that there cannot be a central authority for package naming. Different entities may use the same name to refer to unrelated packages. This possibility is unavoidable since these entities do not coordinate and may not even know about each other. Because of the lack of a central naming authority, a single project can quite possibly end up depending on different packages with the same name. Julia's package loading mechanism handles this by not requiring package names to be globally unique, even within the dependency graph of a single project. Instead, packages are identified by universally unique identifiers (UUIDs) which are assigned to them before they are registered. The question "what is X?" is answered by determining the UUID of X.

Since the decentralized naming problem is somewhat abstract, it may help to walk through a concrete scenario to understand the issue. Suppose you're developing an application called App, which uses two packages: Pub and Priv. Priv is a private package that you created, whereas Pub is a public package that you use but don't control. When you created Priv, there was no public package by that name. Subsequently, however, an unrelated package also named Priv has been published and become popular. In fact, the Pub package has started to use it. Therefore, when you next upgrade Pub to get the latest bug fixes and features, App will end up—through no action of yours other than upgrading—depending on two different packages named Priv. App has a direct dependency on your private Priv package, and an indirect dependency, through Pub, on the new public Priv package. Since these two Priv packages are different but both required for App to continue working correctly, the expression import Priv must refer to different Priv packages depending on whether it occurs in App's code or in Pub's code. Julia's package loading mechanism allows this by distinguishing the two Priv packages by context and UUID. How this distinction works is determined by environments, as explained in the following sections.

Environments

An environment determines what import X and using X mean in various code contexts and what files these statements cause to be loaded. Julia understands three kinds of environments:

  1. A project environment is a directory with a project file and an optional manifest file. The project file determines what the names and identities of the direct dependencies of a project are. The manifest file, if present, gives a complete dependency graph, including all direct and indirect dependencies, exact versions of each dependency, and sufficient information to locate and load the correct version.
  2. A package directory is a directory containing the source trees of a set of packages as subdirectories. This kind of environment was the only kind that existed in Julia 0.6 and earlier. If X is a subdirectory of a package directory and X/src/X.jl exists, then the package X is available in the package directory environment and X/src/X.jl is the source file by which it is loaded.
  3. A stacked environment is an ordered set of project environments and package directories, overlaid to make a single composite environment in which all the packages available in its constituent environments are available. Julia's load path is a stacked environment, for example.

These three kinds of environment each serve a different purpose:

  • Project environments provide reproducibility. By checking a project environment into version control—e.g. a git repository—along with the rest of the project's source code, you can reproduce the exact state of the project and all of its dependencies since the manifest file captures the exact version of every dependency and can be rematerialized easily.
  • Package directories provide low-overhead convenience when a project environment would be overkill: they are handy when you have a set of packages and just want to put them somewhere and use them as they are, without having to create and maintain a project environment for them.
  • Stacked environments allow for augmentation of the primary environment with additional tools. You can push an environment including development tools onto the stack and they will be available from the REPL and scripts but not from inside of packages.

As an abstraction, an environment provides three maps: roots, graph and paths. When resolving the meaning of import X, roots and graph are used to determine the identity of X and answer the question "what is X?", while the paths map is used to locate the source code of X and answer the question "where is X?" The specific roles of the three maps are:

  • roots: name::Symbol → uuid::UUID

    An environment's roots map assigns package names to UUIDs for all the top-level dependencies that the environment makes available to the main project (i.e. the ones that can be loaded in Main). When Julia encounters import X in the main project, it looks up the identity of X as roots[:X].

  • graph: context::UUID → (name::Symbol → uuid::UUID)

    An environment's graph is a multilevel map which assigns, for each context UUID, a map from names to UUIDs, similar to the roots map but specific to that context. When Julia sees import X in the code of the package whose UUID is context, it looks up the identity of X as graph[context][:X]. In particular, this means that import X can refer to different packages depending on context.

  • paths: uuid::UUID × name::Symbol → path::String

    The paths map assigns to each package UUID-name pair the location of the entry-point source file of that package. After the identity of X in import X has been resolved to a UUID via roots or graph (depending on whether it is loaded from the main project or a dependency), Julia determines what file to load to acquire X by looking up paths[uuid,:X] in the environment. Including this file should create a module named X. After the first time this package is loaded, any import resolving to the same uuid will simply create a new binding to the same already-loaded package module.
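
As a hypothetical, hand-materialized sketch of these three maps for the App/Pub/Priv scenario above (the UUIDs other than App's and the file path are made up, and Julia never actually builds such dictionaries):

using UUIDs

app         = UUID("8f986787-14fe-4607-ba5d-fbff2944afa9")  # App's UUID, from the project file below
pub         = UUID("00000000-0000-0000-0000-000000000001")  # made-up UUID for Pub
priv_local  = UUID("00000000-0000-0000-0000-000000000002")  # made-up UUID for your private Priv
priv_public = UUID("00000000-0000-0000-0000-000000000003")  # made-up UUID for the public Priv

roots = Dict(:Pub => pub, :Priv => priv_local)
graph = Dict(pub => Dict(:Priv => priv_public))              # inside Pub, Priv means the public package
paths = Dict((priv_local, :Priv) => "/home/user/Priv/src/Priv.jl")  # illustrative entry-point path

roots[:Priv]            # import Priv in the main project resolves to priv_local
graph[pub][:Priv]       # import Priv inside Pub resolves to priv_public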

Each kind of environment defines these three maps differently, as detailed in the following sections.

Note

For clarity of exposition, the examples throughout this chapter include fully materialized data structures for roots, graph and paths. However, these maps are really only abstractions—for efficiency, Julia's package loading code does not actually materialize them. Instead, it queries them through internal APIs and lazily computes only as much of each structure as is necessary to load a given package.

Project environments

A project environment is determined by a directory containing a project file, Project.toml, and optionally a manifest file, Manifest.toml. These files can also be named JuliaProject.toml and JuliaManifest.toml, in which case Project.toml and Manifest.toml are ignored; this allows for coexistence with other tools that might consider files named Project.toml and Manifest.toml significant. For pure Julia projects, however, the names Project.toml and Manifest.toml should be preferred. The roots, graph and paths maps of a project environment are defined as follows.

The roots map of the environment is determined by the contents of the project file, specifically, its top-level name and uuid entries and its [deps] section (all optional). Consider the following example project file for the hypothetical application, App, as described above:

name = "App"
 uuid = "8f986787-14fe-4607-ba5d-fbff2944afa9"
 
 [deps]
diff --git a/en/stable/manual/complex-and-rational-numbers/index.html b/en/stable/manual/complex-and-rational-numbers/index.html
index ed99bbf0b0e9d..77c82329b24f8 100644
--- a/en/stable/manual/complex-and-rational-numbers/index.html
+++ b/en/stable/manual/complex-and-rational-numbers/index.html
@@ -6,7 +6,7 @@
 
 ga('create', 'UA-28835595-6', 'auto');
 ga('send', 'pageview');
-

+

Complex and Rational Numbers

Complex and Rational Numbers

Julia ships with predefined types representing both complex and rational numbers, and supports all standard Mathematical Operations and Elementary Functions on them. Conversion and Promotion are defined so that operations on any combination of predefined numeric types, whether primitive or composite, behave as expected.

Complex Numbers

The global constant im is bound to the complex number i, representing the principal square root of -1. It was deemed harmful to co-opt the name i for a global constant, since it is such a popular index variable name. Since Julia allows numeric literals to be juxtaposed with identifiers as coefficients, this binding suffices to provide convenient syntax for complex numbers, similar to the traditional mathematical notation:

julia> 1 + 2im
 1 + 2im

You can perform all the standard arithmetic operations with complex numbers:

julia> (1 + 2im)*(2 - 3im)
 8 + 1im
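
A further sketch (not part of the original page): the standard complex functions are available as well:

julia> real(1 + 2im), imag(1 + 2im), conj(1 + 2im), abs(3 + 4im)
(1, 2, 1 - 2im, 5.0)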
 
diff --git a/en/stable/manual/constructors/index.html b/en/stable/manual/constructors/index.html
index 2023c20a97737..7f9d8f80e4a72 100644
--- a/en/stable/manual/constructors/index.html
+++ b/en/stable/manual/constructors/index.html
@@ -6,7 +6,7 @@
 
 ga('create', 'UA-28835595-6', 'auto');
 ga('send', 'pageview');
-

+

Constructors

Constructors

Constructors [1] are functions that create new objects – specifically, instances of Composite Types. In Julia, type objects also serve as constructor functions: they create new instances of themselves when applied to an argument tuple as a function. This much was already mentioned briefly when composite types were introduced. For example:

julia> struct Foo
            bar
            baz
        end
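
As a continuation sketch (not taken verbatim from the page), applying the type like a function then constructs an instance, whose fields can be accessed with dot syntax:

julia> foo = Foo(1, 2)
Foo(1, 2)

julia> foo.bar
1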
diff --git a/en/stable/manual/control-flow/index.html b/en/stable/manual/control-flow/index.html
index f50dd1da92271..a9a0c9440b628 100644
--- a/en/stable/manual/control-flow/index.html
+++ b/en/stable/manual/control-flow/index.html
@@ -6,7 +6,7 @@
 
 ga('create', 'UA-28835595-6', 'auto');
 ga('send', 'pageview');
-

+

Control Flow

Control Flow

Julia provides a variety of control flow constructs:

  • Compound Expressions: begin and (;).
  • Conditional Evaluation: if-elseif-else and ?: (ternary operator).
  • Short-Circuit Evaluation: &&, || and chained comparisons.
  • Repeated Evaluation: Loops (while and for).
  • Exception Handling: try-catch, error and throw.
  • Tasks (aka Coroutines): yieldto.

The first five control flow mechanisms are standard to high-level programming languages. Tasks are not so standard: they provide non-local control flow, making it possible to switch between temporarily-suspended computations. This is a powerful construct: both exception handling and cooperative multitasking are implemented in Julia using tasks. Everyday programming requires no direct usage of tasks, but certain problems can be solved much more easily by using tasks.

Compound Expressions

Sometimes it is convenient to have a single expression which evaluates several subexpressions in order, returning the value of the last subexpression as its value. There are two Julia constructs that accomplish this: begin blocks and (;) chains. The value of both compound expression constructs is that of the last subexpression. Here's an example of a begin block:

julia> z = begin
            x = 1
            y = 2
            x + y
diff --git a/en/stable/manual/conversion-and-promotion/index.html b/en/stable/manual/conversion-and-promotion/index.html
index 2e3eb09dbe250..3a2ca5aed77c2 100644
--- a/en/stable/manual/conversion-and-promotion/index.html
+++ b/en/stable/manual/conversion-and-promotion/index.html
@@ -6,7 +6,7 @@
 
 ga('create', 'UA-28835595-6', 'auto');
 ga('send', 'pageview');
-

+

Conversion and Promotion

Conversion and Promotion

Julia has a system for promoting arguments of mathematical operators to a common type, which has been mentioned in various other sections, including Integers and Floating-Point Numbers, Mathematical Operations and Elementary Functions, Types, and Methods. In this section, we explain how this promotion system works, as well as how to extend it to new types and apply it to functions besides built-in mathematical operators. Traditionally, programming languages fall into two camps with respect to promotion of arithmetic arguments:

  • Automatic promotion for built-in arithmetic types and operators. In most languages, built-in numeric types, when used as operands to arithmetic operators with infix syntax, such as +, -, *, and /, are automatically promoted to a common type to produce the expected results. C, Java, Perl, and Python, to name a few, all correctly compute the sum 1 + 1.5 as the floating-point value 2.5, even though one of the operands to + is an integer. These systems are convenient and designed carefully enough that they are generally all-but-invisible to the programmer: hardly anyone consciously thinks of this promotion taking place when writing such an expression, but compilers and interpreters must perform conversion before addition since integers and floating-point values cannot be added as-is. Complex rules for such automatic conversions are thus inevitably part of specifications and implementations for such languages.
  • No automatic promotion. This camp includes Ada and ML – very "strict" statically typed languages. In these languages, every conversion must be explicitly specified by the programmer. Thus, the example expression 1 + 1.5 would be a compilation error in both Ada and ML. Instead one must write real(1) + 1.5, explicitly converting the integer 1 to a floating-point value before performing addition. Explicit conversion everywhere is so inconvenient, however, that even Ada has some degree of automatic conversion: integer literals are promoted to the expected integer type automatically, and floating-point literals are similarly promoted to appropriate floating-point types.

In a sense, Julia falls into the "no automatic promotion" category: mathematical operators are just functions with special syntax, and the arguments of functions are never automatically converted. However, one may observe that applying mathematical operations to a wide variety of mixed argument types is just an extreme case of polymorphic multiple dispatch – something which Julia's dispatch and type systems are particularly well-suited to handle. "Automatic" promotion of mathematical operands simply emerges as a special application: Julia comes with pre-defined catch-all dispatch rules for mathematical operators, invoked when no specific implementation exists for some combination of operand types. These catch-all rules first promote all operands to a common type using user-definable promotion rules, and then invoke a specialized implementation of the operator in question for the resulting values, now of the same type. User-defined types can easily participate in this promotion system by defining methods for conversion to and from other types, and providing a handful of promotion rules defining what types they should promote to when mixed with other types.
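
As a minimal sketch (the wrapper type MyNumber is made up for illustration) of how a user-defined type can participate in this system by supplying convert and promote_rule methods:

struct MyNumber <: Number
    x::Float64
end

Base.convert(::Type{MyNumber}, v::Real) = MyNumber(Float64(v))     # how to turn a Real into a MyNumber
Base.promote_rule(::Type{MyNumber}, ::Type{<:Real}) = MyNumber     # mixing MyNumber with any Real yields MyNumber
Base.:+(a::MyNumber, b::MyNumber) = MyNumber(a.x + b.x)            # an implementation for same-type operands

MyNumber(1.0) + 2      # the catch-all + promotes both operands to MyNumber, giving MyNumber(3.0)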

Conversion

The standard way to obtain a value of a certain type T is to call the type's constructor, T(x). However, there are cases where it's convenient to convert a value from one type to another without the programmer asking for it explicitly. One example is assigning a value into an array: if A is a Vector{Float64}, the expression A[1] = 2 should work by automatically converting the 2 from Int to Float64, and storing the result in the array. This is done via the convert function.

The convert function generally takes two arguments: the first is a type object and the second is a value to convert to that type. The returned value is the value converted to an instance of the given type. The simplest way to understand this function is to see it in action:

julia> x = 12
 12
 
 julia> typeof(x)
diff --git a/en/stable/manual/documentation/index.html b/en/stable/manual/documentation/index.html
index f4752580099bc..1b15e780127f8 100644
--- a/en/stable/manual/documentation/index.html
+++ b/en/stable/manual/documentation/index.html
@@ -6,7 +6,7 @@
 
 ga('create', 'UA-28835595-6', 'auto');
 ga('send', 'pageview');
-

+

Documentation

Documentation

Since Julia 0.4, Julia has had a built-in documentation system that enables package developers and users to document functions, types and other objects easily.

The basic syntax is simple: any string appearing at the top-level right before an object (function, macro, type or instance) will be interpreted as documenting it (these are called docstrings). Note that no blank lines or comments may intervene between a docstring and the documented object. Here is a basic example:

"Tell whether there are too foo items in the array."
 foo(xs::Array) = ...

Documentation is interpreted as Markdown, so you can use indentation and code fences to delimit code examples from text. Technically, any object can be associated with any other as metadata; Markdown happens to be the default, but one can construct other string macros and pass them to the @doc macro just as well.
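
For instance (a small sketch, not from the original page), a docstring attached in the usual way can later be retrieved with the @doc macro:

"Compute twice the argument."
double(x) = 2x

@doc double      # retrieves the Markdown docstring attached above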

Here is a more complex example, still using Markdown:

"""
     bar(x[, y])
 
diff --git a/en/stable/manual/embedding/index.html b/en/stable/manual/embedding/index.html
index afbe2c860533d..126d79450dcfc 100644
--- a/en/stable/manual/embedding/index.html
+++ b/en/stable/manual/embedding/index.html
@@ -6,7 +6,7 @@
 
 ga('create', 'UA-28835595-6', 'auto');
 ga('send', 'pageview');
-

+

Embedding Julia

Embedding Julia

As we have seen in Calling C and Fortran Code, Julia has a simple and efficient way to call functions written in C. But there are situations where the opposite is needed: calling Julia functions from C code. This can be used to integrate Julia code into a larger C/C++ project, without the need to rewrite everything in C/C++. Julia has a C API to make this possible. As almost all programming languages have some way to call C functions, the Julia C API can also be used to build further language bridges (e.g. calling Julia from Python or C#).

High-Level Embedding

We start with a simple C program that initializes Julia and calls some Julia code:

#include <julia.h>
 JULIA_DEFINE_FAST_TLS() // only define this once, in an executable (not in a shared library) if you want fast code.
 
 int main(int argc, char *argv[])
diff --git a/en/stable/manual/environment-variables/index.html b/en/stable/manual/environment-variables/index.html
index 4bc2f2ed06e72..5a87fb7678c73 100644
--- a/en/stable/manual/environment-variables/index.html
+++ b/en/stable/manual/environment-variables/index.html
@@ -6,6 +6,6 @@
 
 ga('create', 'UA-28835595-6', 'auto');
 ga('send', 'pageview');
-

+

Environment Variables

Environment Variables

Julia may be configured with a number of environment variables, set either in the usual way for the operating system or in a portable way from within Julia. Suppose, for example, that you want to set the environment variable JULIA_EDITOR to vim: either type ENV["JULIA_EDITOR"] = "vim" in the REPL to make this change on a case-by-case basis, or add the same line to the user configuration file ~/.julia/config/startup.jl in the user's home directory to have a permanent effect. The current value of the same environment variable can be determined by evaluating ENV["JULIA_EDITOR"].

The environment variables that Julia uses generally start with JULIA. If InteractiveUtils.versioninfo is called with verbose equal to true, then the output will list defined environment variables relevant for Julia, including those for which JULIA appears in the name.

File locations

JULIA_BINDIR

The absolute path of the directory containing the Julia executable, which sets the global variable Sys.BINDIR. If $JULIA_BINDIR is not set, then Julia determines the value Sys.BINDIR at run-time.

The executable itself is one of

$JULIA_BINDIR/julia
 $JULIA_BINDIR/julia-debug

by default.

The global variable Base.DATAROOTDIR determines a relative path from Sys.BINDIR to the data directory associated with Julia. Then the path

$JULIA_BINDIR/$DATAROOTDIR/julia/base

determines the directory in which Julia initially searches for source files (via Base.find_source_file()).

Likewise, the global variable Base.SYSCONFDIR determines a relative path to the configuration file directory. Then Julia searches for a startup.jl file at

$JULIA_BINDIR/$SYSCONFDIR/julia/startup.jl
 $JULIA_BINDIR/../etc/julia/startup.jl

by default (via Base.load_julia_startup()).

For example, a Linux installation with a Julia executable located at /bin/julia, a DATAROOTDIR of ../share, and a SYSCONFDIR of ../etc will have JULIA_BINDIR set to /bin, a source-file search path of

/share/julia/base

and a global configuration search path of

/etc/julia/startup.jl

JULIA_LOAD_PATH

A list of absolute paths, separated by the platform's path separator, that are appended to the variable LOAD_PATH. (On Unix-like systems, the path separator is :; on Windows, it is ;.) The LOAD_PATH variable is where Base.require and Base.load_in_path() look for code; it defaults to the absolute path $JULIA_BINDIR/../share/julia/stdlib/v$(VERSION.major).$(VERSION.minor) so that, e.g., version 0.7 of Julia on a Linux system with a Julia executable at /bin/julia will have a default LOAD_PATH of /share/julia/stdlib/v0.7.

JULIA_HISTORY

The absolute path REPL.find_hist_file() of the REPL's history file. If $JULIA_HISTORY is not set, then REPL.find_hist_file() defaults to

$HOME/.julia/logs/repl_history.jl

JULIA_PKGRESOLVE_ACCURACY

A positive Int that determines how much time the max-sum subroutine MaxSum.maxsum() of the package dependency resolver will devote to attempting to satisfy constraints before giving up: its default value is 1, and larger values correspond to larger amounts of time.

Suppose the value of $JULIA_PKGRESOLVE_ACCURACY is n. Then

  • the number of pre-decimation iterations is 20*n,
  • the number of iterations between decimation steps is 10*n, and
  • at decimation steps, at most one in every 20*n packages is decimated.

External applications

JULIA_SHELL

The absolute path of the shell with which Julia should execute external commands (via Base.repl_cmd()). Defaults to the environment variable $SHELL, and falls back to /bin/sh if $SHELL is unset.

Note

On Windows, this environment variable is ignored, and external commands are executed directly.

JULIA_EDITOR

The editor returned by InteractiveUtils.editor() and used in, e.g., InteractiveUtils.edit, referring to the command of the preferred editor, for instance vim.

$JULIA_EDITOR takes precedence over $VISUAL, which in turn takes precedence over $EDITOR. If none of these environment variables is set, then the editor is taken to be open on Windows and OS X, or /etc/alternatives/editor if it exists, or emacs otherwise.

Parallelization

JULIA_CPU_THREADS

Overrides the global variable Base.Sys.CPU_THREADS, the number of logical CPU cores available.

JULIA_WORKER_TIMEOUT

A Float64 that sets the value of Base.worker_timeout() (default: 60.0). This function gives the number of seconds a worker process will wait for a master process to establish a connection before dying.

JULIA_NUM_THREADS

An unsigned 64-bit integer (uint64_t) that sets the maximum number of threads available to Julia. If $JULIA_NUM_THREADS exceeds the number of available physical CPU cores, then the number of threads is set to the number of cores. If $JULIA_NUM_THREADS is not positive or is not set, or if the number of CPU cores cannot be determined through system calls, then the number of threads is set to 1.
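
For example, from a Unix shell (a sketch; the thread count is arbitrary and assumes at least four cores are available):

$ JULIA_NUM_THREADS=4 julia -e 'println(Threads.nthreads())'
4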

JULIA_THREAD_SLEEP_THRESHOLD

If set to a string that starts with the case-insensitive substring "infinite", then spinning threads never sleep. Otherwise, $JULIA_THREAD_SLEEP_THRESHOLD is interpreted as an unsigned 64-bit integer (uint64_t) and gives, in nanoseconds, the amount of time after which spinning threads should sleep.

JULIA_EXCLUSIVE

If set to anything besides 0, then Julia's thread policy is consistent with running on a dedicated machine: the master thread is on proc 0, and threads are affinitized. Otherwise, Julia lets the operating system handle thread policy.

REPL formatting

Environment variables that determine how REPL output should be formatted at the terminal. Generally, these variables should be set to ANSI terminal escape sequences. Julia provides a high-level interface with much of the same functionality: see the section on The Julia REPL.
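
For example, in a Bash shell (a sketch; $'\033[95m' is the ANSI escape for bright magenta):

$ export JULIA_ERROR_COLOR=$'\033[95m'
$ julia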

JULIA_ERROR_COLOR

The formatting Base.error_color() (default: light red, "\033[91m") that errors should have at the terminal.

JULIA_WARN_COLOR

The formatting Base.warn_color() (default: yellow, "\033[93m") that warnings should have at the terminal.

JULIA_INFO_COLOR

The formatting Base.info_color() (default: cyan, "\033[36m") that info should have at the terminal.

JULIA_INPUT_COLOR

The formatting Base.input_color() (default: normal, "\033[0m") that input should have at the terminal.

JULIA_ANSWER_COLOR

The formatting Base.answer_color() (default: normal, "\033[0m") that output should have at the terminal.

JULIA_STACKFRAME_LINEINFO_COLOR

The formatting Base.stackframe_lineinfo_color() (default: bold, "\033[1m") that line info should have during a stack trace at the terminal.

JULIA_STACKFRAME_FUNCTION_COLOR

The formatting Base.stackframe_function_color() (default: bold, "\033[1m") that function calls should have during a stack trace at the terminal.

Debugging and profiling

JULIA_GC_ALLOC_POOL, JULIA_GC_ALLOC_OTHER, JULIA_GC_ALLOC_PRINT

If set, these environment variables take strings that optionally start with the character 'r', followed by a colon-separated list of three signed 64-bit integers (int64_t). This triple of integers a:b:c represents the arithmetic sequence a, a + b, a + 2*b, ... c.

  • If it's the nth time that jl_gc_pool_alloc() has been called, and n belongs to the arithmetic sequence represented by $JULIA_GC_ALLOC_POOL, then garbage collection is forced.
  • If it's the nth time that maybe_collect() has been called, and n belongs to the arithmetic sequence represented by $JULIA_GC_ALLOC_OTHER, then garbage collection is forced.
  • If it's the nth time that jl_gc_collect() has been called, and n belongs to the arithmetic sequence represented by $JULIA_GC_ALLOC_PRINT, then counts for the number of calls to jl_gc_pool_alloc() and maybe_collect() are printed.

If the value of the environment variable begins with the character 'r', then the interval between garbage collection events is randomized.

Note

These environment variables only have an effect if Julia was compiled with garbage-collection debugging (that is, if WITH_GC_DEBUG_ENV is set to 1 in the build configuration).

JULIA_GC_NO_GENERATIONAL

If set to anything besides 0, then the Julia garbage collector never performs "quick sweeps" of memory.

Note

This environment variable only has an effect if Julia was compiled with garbage-collection debugging (that is, if WITH_GC_DEBUG_ENV is set to 1 in the build configuration).

JULIA_GC_WAIT_FOR_DEBUGGER

If set to anything besides 0, then the Julia garbage collector will wait for a debugger to attach instead of aborting whenever there's a critical error.

Note

This environment variable only has an effect if Julia was compiled with garbage-collection debugging (that is, if WITH_GC_DEBUG_ENV is set to 1 in the build configuration).

ENABLE_JITPROFILING

If set to anything besides 0, then the compiler will create and register an event listener for just-in-time (JIT) profiling.

Note

This environment variable only has an effect if Julia was compiled with JIT profiling support, using either

  • Intel's VTune™ Amplifier (USE_INTEL_JITEVENTS set to 1 in the build configuration), or
  • OProfile (USE_OPROFILE_JITEVENTS set to 1 in the build configuration).

JULIA_LLVM_ARGS

Arguments to be passed to the LLVM backend.

JULIA_DEBUG_LOADING

If set, then Julia prints detailed information about the cache in the loading process of Base.require.

diff --git a/en/stable/manual/faq/index.html b/en/stable/manual/faq/index.html
index a9a2e83d95f29..43f17e5978ce2 100644
--- a/en/stable/manual/faq/index.html
+++ b/en/stable/manual/faq/index.html
@@ -6,7 +6,7 @@
 
 ga('create', 'UA-28835595-6', 'auto');
 ga('send', 'pageview');
-

+

Frequently Asked Questions

Frequently Asked Questions

Sessions and the REPL

How do I delete an object in memory?

Julia does not have an analog of MATLAB's clear function; once a name is defined in a Julia session (technically, in module Main), it is always present.

If memory usage is your concern, you can always replace objects with ones that consume less memory. For example, if A is a gigabyte-sized array that you no longer need, you can free the memory with A = nothing. The memory will be released the next time the garbage collector runs; you can force this to happen with GC.gc(). Moreover, an attempt to use A will likely result in an error, because most methods are not defined on type Nothing.
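
For example (a minimal sketch; the array size is arbitrary):

julia> A = rand(10^8);    # allocates roughly 800 MB

julia> A = nothing;       # drop the only reference to the array

julia> GC.gc()            # force a collection so the memory is returned sooner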

How can I modify the declaration of a type in my session?

Perhaps you've defined a type and then realize you need to add a new field. If you try this at the REPL, you get the error:

ERROR: invalid redefinition of constant MyType

Types in module Main cannot be redefined.

While this can be inconvenient when you are developing new code, there's an excellent workaround. Modules can be replaced by redefining them, and so if you wrap all your new code inside a module you can redefine types and constants. You can't import the type names into Main and then expect to be able to redefine them there, but you can use the module name to resolve the scope. In other words, while developing you might use a workflow something like this:

include("mynewcode.jl")              # this defines a module MyModule
 obj1 = MyModule.ObjConstructor(a, b)
 obj2 = MyModule.somefunction(obj1)
 # Got an error. Change something in "mynewcode.jl"
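 include("mynewcode.jl")              # reload: this replaces module MyModule (illustrative continuation)
 obj1 = MyModule.ObjConstructor(a, b) # objects built from the old module are no longer valid,
 obj2 = MyModule.somefunction(obj1)   # so they are reconstructed here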
diff --git a/en/stable/manual/functions/index.html b/en/stable/manual/functions/index.html
index 40dbff41a6f59..bcd4af8cdb958 100644
--- a/en/stable/manual/functions/index.html
+++ b/en/stable/manual/functions/index.html
@@ -6,7 +6,7 @@
 
 ga('create', 'UA-28835595-6', 'auto');
 ga('send', 'pageview');
-

+

Functions

Functions

In Julia, a function is an object that maps a tuple of argument values to a return value. Julia functions are not pure mathematical functions, in the sense that functions can alter and be affected by the global state of the program. The basic syntax for defining functions in Julia is:

julia> function f(x,y)
            x + y
        end
 f (generic function with 1 method)

There is a second, more terse syntax for defining a function in Julia. The traditional function declaration syntax demonstrated above is equivalent to the following compact "assignment form":

julia> f(x,y) = x + y
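f (generic function with 1 method)

julia> f(2, 3)    # illustrative check: both definition forms behave identically
5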
diff --git a/en/stable/manual/getting-started/index.html b/en/stable/manual/getting-started/index.html
index 5b303f7b72160..e1c9dcfb2fbaf 100644
--- a/en/stable/manual/getting-started/index.html
+++ b/en/stable/manual/getting-started/index.html
@@ -6,7 +6,7 @@
 
 ga('create', 'UA-28835595-6', 'auto');
 ga('send', 'pageview');
-

+

Getting Started

Getting Started

Julia installation is straightforward, whether using precompiled binaries or compiling from source. Download and install Julia by following the instructions at https://julialang.org/downloads/.

The easiest way to learn and experiment with Julia is by starting an interactive session (also known as a read-eval-print loop or "REPL") by double-clicking the Julia executable or running julia from the command line:

$ julia
 
                _
    _       _ _(_)_     |  Documentation: https://docs.julialang.org
diff --git a/en/stable/manual/handling-operating-system-variation/index.html b/en/stable/manual/handling-operating-system-variation/index.html
index 02525caaefc74..e9a569f018157 100644
--- a/en/stable/manual/handling-operating-system-variation/index.html
+++ b/en/stable/manual/handling-operating-system-variation/index.html
@@ -6,7 +6,7 @@
 
 ga('create', 'UA-28835595-6', 'auto');
 ga('send', 'pageview');
-

+

Handling Operating System Variation

Handling Operating System Variation

When dealing with platform libraries, it is often necessary to provide special cases for various platforms. The variable Sys.KERNEL can be used to write these special cases. There are several functions in the Sys module intended to make this easier: isunix, islinux, isapple, isbsd, and iswindows. These may be used as follows:

if Sys.iswindows()
     some_complicated_thing(a)
 end

Note that islinux and isapple are mutually exclusive subsets of isunix. Additionally, there is a macro @static which makes it possible to use these functions to conditionally hide invalid code, as demonstrated in the following examples.

Simple blocks:

ccall((@static Sys.iswindows() ? :_fopen : :fopen), ...)

Complex blocks:

@static if Sys.islinux()
     some_complicated_thing(a)
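 else
     some_other_complicated_thing(a)   # hypothetical non-Linux branch
 end
 # (illustrative completion: @static evaluates the condition at parse time, so only the
 #  branch for the current platform is compiled)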
diff --git a/en/stable/manual/integers-and-floating-point-numbers/index.html b/en/stable/manual/integers-and-floating-point-numbers/index.html
index 9bff7c71d0640..f8e60f9606c15 100644
--- a/en/stable/manual/integers-and-floating-point-numbers/index.html
+++ b/en/stable/manual/integers-and-floating-point-numbers/index.html
@@ -6,7 +6,7 @@
 
 ga('create', 'UA-28835595-6', 'auto');
 ga('send', 'pageview');
-

+

Integers and Floating-Point Numbers

Integers and Floating-Point Numbers

Integers and floating-point values are the basic building blocks of arithmetic and computation. Built-in representations of such values are called numeric primitives, while representations of integers and floating-point numbers as immediate values in code are known as numeric literals. For example, 1 is an integer literal, while 1.0 is a floating-point literal; their binary in-memory representations as objects are numeric primitives.

Julia provides a broad range of primitive numeric types, and a full complement of arithmetic and bitwise operators as well as standard mathematical functions are defined over them. These map directly onto numeric types and operations that are natively supported on modern computers, thus allowing Julia to take full advantage of computational resources. Additionally, Julia provides software support for Arbitrary Precision Arithmetic, which can handle operations on numeric values that cannot be represented effectively in native hardware representations, but at the cost of relatively slower performance.

The following are Julia's primitive numeric types:

  • Integer types:
Type      Signed?   Number of bits   Smallest value   Largest value
Int8      ✓         8                -2^7             2^7 - 1
UInt8               8                0                2^8 - 1
Int16     ✓         16               -2^15            2^15 - 1
UInt16              16               0                2^16 - 1
Int32     ✓         32               -2^31            2^31 - 1
UInt32              32               0                2^32 - 1
Int64     ✓         64               -2^63            2^63 - 1
UInt64              64               0                2^64 - 1
Int128    ✓         128              -2^127           2^127 - 1
UInt128             128              0                2^128 - 1
Bool      N/A       8                false (0)        true (1)
  • Floating-point types:
Type      Precision   Number of bits
Float16   half        16
Float32   single      32
Float64   double      64

Additionally, full support for Complex and Rational Numbers is built on top of these primitive numeric types. All numeric types interoperate naturally without explicit casting, thanks to a flexible, user-extensible type promotion system.

Integers

Literal integers are represented in the standard manner:

julia> 1
 1
 
 julia> 1234
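 1234

 julia> typeof(1)    # illustrative continuation: the default integer type on a typical 64-bit system
 Int64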
diff --git a/en/stable/manual/interfaces/index.html b/en/stable/manual/interfaces/index.html
index e413069a0052b..9881459e0387e 100644
--- a/en/stable/manual/interfaces/index.html
+++ b/en/stable/manual/interfaces/index.html
@@ -6,7 +6,7 @@
 
 ga('create', 'UA-28835595-6', 'auto');
 ga('send', 'pageview');
-

+

Interfaces

Interfaces

A lot of the power and extensibility in Julia comes from a collection of informal interfaces. By extending a few specific methods to work for a custom type, objects of that type not only receive those functionalities, but they are also able to be used in other methods that are written to generically build upon those behaviors.

Iteration

Required methods       Brief description
iterate(iter)          Returns either a tuple of the first item and initial state or nothing if empty
iterate(iter, state)   Returns either a tuple of the next item and next state or nothing if no items remain

Important optional methods   Default definition   Brief description
IteratorSize(IterType)       HasLength()          One of HasLength(), HasShape{N}(), IsInfinite(), or SizeUnknown() as appropriate
IteratorEltype(IterType)     HasEltype()          Either EltypeUnknown() or HasEltype() as appropriate
eltype(IterType)             Any                  The type of the first entry of the tuple returned by iterate()
length(iter)                 (undefined)          The number of items, if known
size(iter, [dim...])         (undefined)          The number of items in each dimension, if known

Value returned by IteratorSize(IterType)   Required Methods
HasLength()                                length(iter)
HasShape{N}()                              length(iter) and size(iter, [dim...])
IsInfinite()                               (none)
SizeUnknown()                              (none)

Value returned by IteratorEltype(IterType)   Required Methods
HasEltype()                                  eltype(IterType)
EltypeUnknown()                              (none)

Sequential iteration is implemented by the iterate function. Instead of mutating objects as they are iterated over, Julia iterators may keep track of the iteration state externally from the object. The return value from iterate is always either a tuple of a value and a state, or nothing if no elements remain. The state object will be passed back to the iterate function on the next iteration and is generally considered an implementation detail private to the iterable object.
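
As a concrete sketch of these two methods, here is a minimal iterator for a hypothetical Countdown type (the type and its fields are illustrative, not part of the manual's examples):

struct Countdown
    from::Int
end

Base.iterate(c::Countdown, state = c.from) = state < 1 ? nothing : (state, state - 1)
Base.length(c::Countdown) = c.from

for i in Countdown(3)
    print(i, " ")   # prints: 3 2 1
end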

Any object that defines this function is iterable and can be used in the many functions that rely upon iteration. It can also be used directly in a for loop since the syntax:

for i in iter   # or  "for i = iter"
     # body
 end

is translated into:

next = iterate(iter)
 while next !== nothing
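     (i, state) = next
     # body
     next = iterate(iter, state)
 end
 # (illustrative completion of the translation shown above)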
diff --git a/en/stable/manual/mathematical-operations/index.html b/en/stable/manual/mathematical-operations/index.html
index 8a62da0f42c96..dc5e8e2804779 100644
--- a/en/stable/manual/mathematical-operations/index.html
+++ b/en/stable/manual/mathematical-operations/index.html
@@ -6,7 +6,7 @@
 
 ga('create', 'UA-28835595-6', 'auto');
 ga('send', 'pageview');
-

+

Mathematical Operations and Elementary Functions

Mathematical Operations and Elementary Functions

Julia provides a complete collection of basic arithmetic and bitwise operators across all of its numeric primitive types, as well as providing portable, efficient implementations of a comprehensive collection of standard mathematical functions.

Arithmetic Operators

The following arithmetic operators are supported on all primitive numeric types:

Expression   Name             Description
+x           unary plus       the identity operation
-x           unary minus      maps values to their additive inverses
x + y        binary plus      performs addition
x - y        binary minus     performs subtraction
x * y        times            performs multiplication
x / y        divide           performs division
x ÷ y        integer divide   x / y, truncated to an integer
x \ y        inverse divide   equivalent to y / x
x ^ y        power            raises x to the yth power
x % y        remainder        equivalent to rem(x,y)

as well as the negation on Bool types:

Expression   Name       Description
!x           negation   changes true to false and vice versa

Julia's promotion system makes arithmetic operations on mixtures of argument types "just work" naturally and automatically. See Conversion and Promotion for details of the promotion system.

Here are some simple examples using arithmetic operators:

julia> 1 + 2 + 3
 6
 
 julia> 1 - 2
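 -1

 julia> 3 + 4.0    # illustrative continuation: mixed Int/Float64 operands promote to Float64
 7.0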
diff --git a/en/stable/manual/metaprogramming/index.html b/en/stable/manual/metaprogramming/index.html
index dc11fa8b8163a..abc0048c93d91 100644
--- a/en/stable/manual/metaprogramming/index.html
+++ b/en/stable/manual/metaprogramming/index.html
@@ -6,7 +6,7 @@
 
 ga('create', 'UA-28835595-6', 'auto');
 ga('send', 'pageview');
-

+

Metaprogramming

Metaprogramming

The strongest legacy of Lisp in the Julia language is its metaprogramming support. Like Lisp, Julia represents its own code as a data structure of the language itself. Since code is represented by objects that can be created and manipulated from within the language, it is possible for a program to transform and generate its own code. This allows sophisticated code generation without extra build steps, and also allows true Lisp-style macros operating at the level of abstract syntax trees. In contrast, preprocessor "macro" systems, like that of C and C++, perform textual manipulation and substitution before any actual parsing or interpretation occurs. Because all data types and code in Julia are represented by Julia data structures, powerful reflection capabilities are available to explore the internals of a program and its types just like any other data.

Program representation

Every Julia program starts life as a string:

julia> prog = "1 + 1"
 "1 + 1"

What happens next?

The next step is to parse each string into an object called an expression, represented by the Julia type Expr:

julia> ex1 = Meta.parse(prog)
 :(1 + 1)
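
 julia> typeof(ex1)    # illustrative continuation: parsed programs are Expr objects
 Expr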
 
diff --git a/en/stable/manual/methods/index.html b/en/stable/manual/methods/index.html
index fd9ecbaff2025..1c779721481eb 100644
--- a/en/stable/manual/methods/index.html
+++ b/en/stable/manual/methods/index.html
@@ -6,7 +6,7 @@
 
 ga('create', 'UA-28835595-6', 'auto');
 ga('send', 'pageview');
-

+

Methods

Methods

Recall from Functions that a function is an object that maps a tuple of arguments to a return value, or throws an exception if no appropriate value can be returned. It is common for the same conceptual function or operation to be implemented quite differently for different types of arguments: adding two integers is very different from adding two floating-point numbers, both of which are distinct from adding an integer to a floating-point number. Despite their implementation differences, these operations all fall under the general concept of "addition". Accordingly, in Julia, these behaviors all belong to a single object: the + function.

To facilitate using many different implementations of the same concept smoothly, functions need not be defined all at once, but can rather be defined piecewise by providing specific behaviors for certain combinations of argument types and counts. A definition of one possible behavior for a function is called a method. Thus far, we have presented only examples of functions defined with a single method, applicable to all types of arguments. However, the signatures of method definitions can be annotated to indicate the types of arguments in addition to their number, and more than a single method definition may be provided. When a function is applied to a particular tuple of arguments, the most specific method applicable to those arguments is applied. Thus, the overall behavior of a function is a patchwork of the behaviors of its various method definitions. If the patchwork is well designed, even though the implementations of the methods may be quite different, the outward behavior of the function will appear seamless and consistent.

The choice of which method to execute when a function is applied is called dispatch. Julia allows the dispatch process to choose which of a function's methods to call based on the number of arguments given, and on the types of all of the function's arguments. This is different than traditional object-oriented languages, where dispatch occurs based only on the first argument, which often has a special argument syntax, and is sometimes implied rather than explicitly written as an argument. [1] Using all of a function's arguments to choose which method should be invoked, rather than just the first, is known as multiple dispatch. Multiple dispatch is particularly useful for mathematical code, where it makes little sense to artificially deem the operations to "belong" to one argument more than any of the others: does the addition operation in x + y belong to x any more than it does to y? The implementation of a mathematical operator generally depends on the types of all of its arguments. Even beyond mathematical operations, however, multiple dispatch ends up being a powerful and convenient paradigm for structuring and organizing programs.

[1]

In C++ or Java, for example, in a method call like obj.meth(arg1,arg2), the object obj "receives" the method call and is implicitly passed to the method via the this keyword, rather than as an explicit method argument. When the current this object is the receiver of a method call, it can be omitted altogether, writing just meth(arg1,arg2), with this implied as the receiving object.

Defining Methods

Until now, we have, in our examples, defined only functions with a single method having unconstrained argument types. Such functions behave just like they would in traditional dynamically typed languages. Nevertheless, we have used multiple dispatch and methods almost continually without being aware of it: all of Julia's standard functions and operators, like the aforementioned + function, have many methods defining their behavior over various possible combinations of argument type and count.

When defining a function, one can optionally constrain the types of parameters it is applicable to, using the :: type-assertion operator, introduced in the section on Composite Types:

julia> f(x::Float64, y::Float64) = 2x + y
 f (generic function with 1 method)

This function definition applies only to calls where x and y are both values of type Float64:

julia> f(2.0, 3.0)
 7.0

Applying it to any other types of arguments will result in a MethodError:

julia> f(2.0, 3)
 ERROR: MethodError: no method matching f(::Float64, ::Int64)
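
 # (illustrative continuation) A second, more general method can handle other numeric types:
 julia> f(x::Number, y::Number) = 2x - y
 f (generic function with 2 methods)

 julia> f(2.0, 3)    # now dispatches to the Number method
 1.0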
diff --git a/en/stable/manual/missing/index.html b/en/stable/manual/missing/index.html
index 5b8bdc1bd846a..5f23f8e08f179 100644
--- a/en/stable/manual/missing/index.html
+++ b/en/stable/manual/missing/index.html
@@ -6,7 +6,7 @@
 
 ga('create', 'UA-28835595-6', 'auto');
 ga('send', 'pageview');
-

+

Missing Values

Missing Values

Julia provides support for representing missing values in the statistical sense, that is for situations where no value is available for a variable in an observation, but a valid value theoretically exists. Missing values are represented via the missing object, which is the singleton instance of the type Missing. missing is equivalent to NULL in SQL and NA in R, and behaves like them in most situations.

Propagation of Missing Values

The behavior of missing values follows one basic rule: missing values propagate automatically when passed to standard operators and functions, in particular mathematical functions. Uncertainty about the value of one of the operands induces uncertainty about the result. In practice, this means an operation involving a missing value generally returns missing:

julia> missing + 1
 missing
 
 julia> "a" * missing
diff --git a/en/stable/manual/modules/index.html b/en/stable/manual/modules/index.html
index ff9d1d8d07152..57e8bf2383aa3 100644
--- a/en/stable/manual/modules/index.html
+++ b/en/stable/manual/modules/index.html
@@ -6,7 +6,7 @@
 
 ga('create', 'UA-28835595-6', 'auto');
 ga('send', 'pageview');
-

+

Modules

Modules

Modules in Julia are separate variable workspaces, i.e. they introduce a new global scope. They are delimited syntactically, inside module Name ... end. Modules allow you to create top-level definitions (aka global variables) without worrying about name conflicts when your code is used together with somebody else's. Within a module, you can control which names from other modules are visible (via importing), and specify which of your names are intended to be public (via exporting).

The following example demonstrates the major features of modules. It is not meant to be run, but is shown for illustrative purposes:

module MyModule
 using Lib
 
 using BigLib: thing1, thing2
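
 # (illustrative continuation of the module skeleton; the names below are hypothetical)
 import Base.show

 export MyType, foo

 struct MyType
     x
 end

 foo(x) = 2x

 end # module MyModule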
diff --git a/en/stable/manual/networking-and-streams/index.html b/en/stable/manual/networking-and-streams/index.html
index f37763fbf2f14..edd075f63469c 100644
--- a/en/stable/manual/networking-and-streams/index.html
+++ b/en/stable/manual/networking-and-streams/index.html
@@ -6,7 +6,7 @@
 
 ga('create', 'UA-28835595-6', 'auto');
 ga('send', 'pageview');
-

+

Networking and Streams

Networking and Streams

Julia provides a rich interface to deal with streaming I/O objects such as terminals, pipes and TCP sockets. This interface, though asynchronous at the system level, is presented in a synchronous manner to the programmer and it is usually unnecessary to think about the underlying asynchronous operation. This is achieved by making heavy use of Julia cooperative threading (coroutine) functionality.

Basic Stream I/O

All Julia streams expose at least a read and a write method, taking the stream as their first argument, e.g.:

julia> write(stdout, "Hello World");  # suppress return value 11 with ;
 Hello World
 julia> read(stdin, Char)
 
diff --git a/en/stable/manual/noteworthy-differences/index.html b/en/stable/manual/noteworthy-differences/index.html
index 971966fe0e081..fba7a0a30e693 100644
--- a/en/stable/manual/noteworthy-differences/index.html
+++ b/en/stable/manual/noteworthy-differences/index.html
@@ -6,4 +6,4 @@
 
 ga('create', 'UA-28835595-6', 'auto');
 ga('send', 'pageview');
-

Noteworthy Differences from other Languages

Noteworthy Differences from other Languages

Noteworthy differences from MATLAB

Although MATLAB users may find Julia's syntax familiar, Julia is not a MATLAB clone. There are major syntactic and functional differences. The following are some noteworthy differences that may trip up Julia users accustomed to MATLAB:

  • Julia arrays are indexed with square brackets, A[i,j].
  • Julia arrays are not copied when assigned to another variable. After A = B, changing elements of B will modify A as well.
  • Julia values are not copied when passed to a function. If a function modifies an array, the changes will be visible in the caller.
  • Julia does not automatically grow arrays in an assignment statement. Whereas in MATLAB a(4) = 3.2 can create the array a = [0 0 0 3.2] and a(5) = 7 can grow it into a = [0 0 0 3.2 7], the corresponding Julia statement a[5] = 7 throws an error if the length of a is less than 5 or if this statement is the first use of the identifier a. Julia has push! and append!, which grow Vectors much more efficiently than MATLAB's a(end+1) = val.
  • The imaginary unit sqrt(-1) is represented in Julia as im, not i or j as in MATLAB.
  • In Julia, literal numbers without a decimal point (such as 42) create integers instead of floating point numbers. Arbitrarily large integer literals are supported. As a result, some operations such as 2^-1 will throw a domain error as the result is not an integer (see the FAQ entry on domain errors for details).
  • In Julia, multiple values are returned and assigned as tuples, e.g. (a, b) = (1, 2) or a, b = 1, 2. MATLAB's nargout, which is often used in MATLAB to do optional work based on the number of returned values, does not exist in Julia. Instead, users can use optional and keyword arguments to achieve similar capabilities.
  • Julia has true one-dimensional arrays. Column vectors are of size N, not Nx1. For example, rand(N) makes a 1-dimensional array.
  • In Julia, [x,y,z] will always construct a 3-element array containing x, y and z (see the short example after this list).
    • To concatenate in the first ("vertical") dimension use either vcat(x,y,z) or separate with semicolons ([x; y; z]).
    • To concatenate in the second ("horizontal") dimension use either hcat(x,y,z) or separate with spaces ([x y z]).
    • To construct block matrices (concatenating in the first two dimensions), use either hvcat or combine spaces and semicolons ([a b; c d]).
  • In Julia, a:b and a:b:c construct AbstractRange objects. To construct a full vector like in MATLAB, use collect(a:b). Generally, there is no need to call collect though. An AbstractRange object will act like a normal array in most cases but is more efficient because it lazily computes its values. This pattern of creating specialized objects instead of full arrays is used frequently, and is also seen in functions such as range, or with iterators such as enumerate, and zip. The special objects can mostly be used as if they were normal arrays.
  • Functions in Julia return values from their last expression or the return keyword instead of listing the names of variables to return in the function definition (see The return Keyword for details).
  • A Julia script may contain any number of functions, and all definitions will be externally visible when the file is loaded. Function definitions can be loaded from files outside the current working directory.
  • In Julia, reductions such as sum, prod, and max are performed over every element of an array when called with a single argument, as in sum(A), even if A has more than one dimension.
  • In Julia, parentheses must be used to call a function with zero arguments, like in rand().
  • Julia discourages the use of semicolons to end statements. The results of statements are not automatically printed (except at the interactive prompt), and lines of code do not need to end with semicolons. println or @printf can be used to print specific output.
  • In Julia, if A and B are arrays, logical comparison operations like A == B do not return an array of booleans. Instead, use A .== B, and similarly for the other boolean operators like <, > and ==.
  • In Julia, the operators &, |, and ⊻ (xor) perform the bitwise operations equivalent to and, or, and xor respectively in MATLAB, and have precedence similar to Python's bitwise operators (unlike C). They can operate on scalars or element-wise across arrays and can be used to combine logical arrays, but note the difference in order of operations: parentheses may be required (e.g., to select elements of A equal to 1 or 2 use (A .== 1) .| (A .== 2)).
  • In Julia, the elements of a collection can be passed as arguments to a function using the splat operator ..., as in xs=[1,2]; f(xs...).
  • Julia's svd returns singular values as a vector instead of as a dense diagonal matrix.
  • In Julia, ... is not used to continue lines of code. Instead, incomplete expressions automatically continue onto the next line.
  • In both Julia and MATLAB, the variable ans is set to the value of the last expression issued in an interactive session. In Julia, unlike MATLAB, ans is not set when Julia code is run in non-interactive mode.
  • Julia's structs do not support dynamically adding fields at runtime, unlike MATLAB's classes. Instead, use a Dict.
  • In Julia each module has its own global scope/namespace, whereas in MATLAB there is just one global scope.
  • In MATLAB, an idiomatic way to remove unwanted values is to use logical indexing, like in the expression x(x>3) or in the statement x(x>3) = [] to modify x in-place. In contrast, Julia provides the higher order functions filter and filter!, allowing users to write filter(z->z>3, x) and filter!(z->z>3, x) as alternatives to the corresponding transliterations x[x.>3] and x = x[x.>3]. Using filter! reduces the use of temporary arrays.
  • The analogue of extracting (or "dereferencing") all elements of a cell array, e.g. in vertcat(A{:}) in MATLAB, is written using the splat operator in Julia, e.g. as vcat(A...).
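
To make the bracket-concatenation rules above concrete (a short sketch; the variable names are arbitrary):

julia> x, y, z = 1, 2, 3;

julia> [x, y, z]          # a 3-element vector
3-element Array{Int64,1}:
 1
 2
 3

julia> [x y z]            # horizontal concatenation: a 1×3 matrix
1×3 Array{Int64,2}:
 1  2  3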

Noteworthy differences from R

One of Julia's goals is to provide an effective language for data analysis and statistical programming. For users coming to Julia from R, these are some noteworthy differences:

  • Julia's single quotes enclose characters, not strings.

  • Julia can create substrings by indexing into strings. In R, strings must be converted into character vectors before creating substrings.

  • In Julia, like Python but unlike R, strings can be created with triple quotes """ ... """. This syntax is convenient for constructing strings that contain line breaks.

  • In Julia, varargs are specified using the splat operator ..., which always follows the name of a specific variable, unlike R, for which ... can occur in isolation.

  • In Julia, modulus is mod(a, b), not a %% b. % in Julia is the remainder operator.

  • In Julia, not all data structures support logical indexing. Furthermore, logical indexing in Julia is supported only with vectors of length equal to the object being indexed. For example:

    • In R, c(1, 2, 3, 4)[c(TRUE, FALSE)] is equivalent to c(1, 3).
    • In R, c(1, 2, 3, 4)[c(TRUE, FALSE, TRUE, FALSE)] is equivalent to c(1, 3).
    • In Julia, [1, 2, 3, 4][[true, false]] throws a BoundsError.
    • In Julia, [1, 2, 3, 4][[true, false, true, false]] produces [1, 3].
  • Like many languages, Julia does not always allow operations on vectors of different lengths, unlike R where the vectors only need to share a common index range. For example, c(1, 2, 3, 4) + c(1, 2) is valid R but the equivalent [1, 2, 3, 4] + [1, 2] will throw an error in Julia.

  • Julia allows an optional trailing comma when that comma does not change the meaning of code. This can cause confusion among R users when indexing into arrays. For example, x[1,] in R would return the first row of a matrix; in Julia, however, the comma is ignored, so x[1,] == x[1], and will return the first element. To extract a row, be sure to use :, as in x[1,:].

  • Julia's map takes the function first, then its arguments, unlike lapply(<structure>, function, ...) in R. Similarly Julia's equivalent of apply(X, MARGIN, FUN, ...) in R is mapslices where the function is the first argument.

  • Multivariate apply in R, e.g. mapply(choose, 11:13, 1:3), can be written as broadcast(binomial, 11:13, 1:3) in Julia. Equivalently Julia offers a shorter dot syntax for vectorizing functions binomial.(11:13, 1:3).

  • Julia uses end to denote the end of conditional blocks (like if), loop blocks (like while/for), and functions. In lieu of the one-line if ( cond ) statement, Julia allows statements of the form if cond; statement; end, cond && statement and !cond || statement. Assignment statements in the latter two syntaxes must be explicitly wrapped in parentheses, e.g. cond && (x = value).

  • In Julia, <-, <<- and -> are not assignment operators.

  • Julia's -> creates an anonymous function.

  • Julia constructs vectors using brackets. Julia's [1, 2, 3] is the equivalent of R's c(1, 2, 3).

  • Julia's * operator can perform matrix multiplication, unlike in R. If A and B are matrices, then A * B denotes a matrix multiplication in Julia, equivalent to R's A %*% B. In R, this same notation would perform an element-wise (Hadamard) product. To get the element-wise multiplication operation, you need to write A .* B in Julia.

  • Julia performs matrix transposition using the transpose function and conjugated transposition using the ' operator or the adjoint function. Julia's transpose(A) is therefore equivalent to R's t(A). Additionally a non-recursive transpose in Julia is provided by the permutedims function.

  • Julia does not require parentheses when writing if statements or for/while loops: use for i in [1, 2, 3] instead of for (i in c(1, 2, 3)) and if i == 1 instead of if (i == 1).

  • Julia does not treat the numbers 0 and 1 as Booleans. You cannot write if (1) in Julia, because if statements accept only booleans. Instead, you can write if true, if Bool(1), or if 1==1.

  • Julia does not provide nrow and ncol. Instead, use size(M, 1) for nrow(M) and size(M, 2) for ncol(M).

  • Julia is careful to distinguish scalars, vectors and matrices. In R, 1 and c(1) are the same. In Julia, they cannot be used interchangeably.

  • Julia's diag and diagm are not like R's.

  • Julia cannot assign to the results of function calls on the left hand side of an assignment operation: you cannot write diag(M) = fill(1, n).

  • Julia discourages populating the main namespace with functions. Most statistical functionality for Julia is found in packages under the JuliaStats organization.

  • Julia provides tuples and real hash tables, but not R-style lists. When returning multiple items, you should typically use a tuple or a named tuple: instead of list(a = 1, b = 2), use (1, 2) or (a=1, b=2).

  • Julia encourages users to write their own types, which are easier to use than S3 or S4 objects in R. Julia's multiple dispatch system means that table(x::TypeA) and table(x::TypeB) act like R's table.TypeA(x) and table.TypeB(x).

  • In Julia, values are not copied when assigned or passed to a function. If a function modifies an array, the changes will be visible in the caller. This is very different from R and allows new functions to operate on large data structures much more efficiently.

  • In Julia, vectors and matrices are concatenated using hcat, vcat and hvcat, not c, rbind and cbind like in R.

  • In Julia, a range like a:b is not shorthand for a vector like in R, but is a specialized AbstractRange object that is used for iteration without high memory overhead. To convert a range into a vector, use collect(a:b).

  • Julia's max and min are the equivalent of pmax and pmin respectively in R, but both arguments need to have the same dimensions. While maximum and minimum replace max and min in R, there are important differences.

  • Julia's sum, prod, maximum, and minimum are different from their counterparts in R. They all accept one or two arguments. The first argument is an iterable collection such as an array. If there is a second argument, then this argument indicates the dimensions over which the operation is carried out. For instance, let A = [1 2; 3 4] in Julia and B <- rbind(c(1,2),c(3,4)) be the same matrix in R. Then sum(A) gives the same result as sum(B), but sum(A, dims=1) is a row vector containing the sum over each column and sum(A, dims=2) is a column vector containing the sum over each row. This contrasts with the behavior of R, where separate colSums(B) and rowSums(B) functions provide these functionalities. If the dims keyword argument is a vector, then it specifies all the dimensions over which the sum is performed, while retaining the dimensions of the summed array, e.g. sum(A, dims=(1,2)) == hcat(10). It should be noted that there is no error checking regarding the second argument. (A short example after this list illustrates the dims keyword.)

  • Julia has several functions that can mutate their arguments. For example, it has both sort and sort!.

  • In R, performance requires vectorization. In Julia, almost the opposite is true: the best performing code is often achieved by using devectorized loops.

  • Julia is eagerly evaluated and does not support R-style lazy evaluation. For most users, this means that there are very few unquoted expressions or column names.

  • Julia does not support the NULL type. The closest equivalent is nothing, but it behaves like a scalar value rather than like a list. Use x == nothing instead of is.null(x).

  • In Julia, missing values are represented by the missing object rather than by NA. Use ismissing(x) instead of isna(x). The skipmissing function is generally used instead of na.rm=TRUE (though in some particular cases functions take a skipmissing argument).

  • Julia lacks the equivalent of R's assign or get.

  • In Julia, return does not require parentheses.

  • In R, an idiomatic way to remove unwanted values is to use logical indexing, like in the expression x[x>3] or in the statement x = x[x>3] to modify x in-place. In contrast, Julia provides the higher order functions filter and filter!, allowing users to write filter(z->z>3, x) and filter!(z->z>3, x) as alternatives to the corresponding transliterations x[x.>3] and x = x[x.>3]. Using filter! reduces the use of temporary arrays.

Noteworthy differences from Python

  • Julia requires end to end a block. Unlike Python, Julia has no pass keyword.
  • In Julia, indexing of arrays, strings, etc. is 1-based not 0-based.
  • Julia's slice indexing includes the last element, unlike in Python. a[2:3] in Julia is a[1:3] in Python.
  • Julia does not support negative indices. In particular, the last element of a list or array is indexed with end in Julia, not -1 as in Python.
  • Julia's for, if, while, etc. blocks are terminated by the end keyword. Indentation level is not significant as it is in Python.
  • Julia has no line continuation syntax: if, at the end of a line, the input so far is a complete expression, it is considered done; otherwise the input continues. One way to force an expression to continue is to wrap it in parentheses.
  • Julia arrays are column major (Fortran ordered) whereas NumPy arrays are row major (C-ordered) by default. To get optimal performance when looping over arrays, the order of the loops should be reversed in Julia relative to NumPy (see relevant section of Performance Tips).
  • Julia's updating operators (e.g. +=, -=, ...) are not in-place whereas NumPy's are. This means A = [1, 1]; B = A; B += [3, 3] doesn't change values in A, it rather rebinds the name B to the result of the right-hand side B = B + 3, which is a new array. For in-place operation, use B .+= 3 (see also dot operators), explicit loops, or InplaceOps.jl.
  • Julia evaluates default values of function arguments every time the method is invoked, unlike in Python where the default values are evaluated only once when the function is defined. For example, the function f(x=rand()) = x returns a new random number every time it is invoked without argument. On the other hand, the function g(x=[1,2]) = push!(x,3) returns [1,2,3] every time it is called as g().
  • In Julia % is the remainder operator, whereas in Python it is the modulus.

Noteworthy differences from C/C++

  • Julia arrays are indexed with square brackets, and can have more than one dimension A[i,j]. This syntax is not just syntactic sugar for a reference to a pointer or address as in C/C++. See the Julia documentation for the syntax for array construction (it has changed between versions).
  • In Julia, indexing of arrays, strings, etc. is 1-based not 0-based.
  • Julia arrays are not copied when assigned to another variable. After A = B, changing elements of B will modify A as well. Updating operators like += do not operate in-place, they are equivalent to A = A + B which rebinds the left-hand side to the result of the right-hand side expression.
  • Julia arrays are column major (Fortran ordered) whereas C/C++ arrays are row major ordered by default. To get optimal performance when looping over arrays, the order of the loops should be reversed in Julia relative to C/C++ (see relevant section of Performance Tips).
  • Julia values are not copied when assigned or passed to a function. If a function modifies an array, the changes will be visible in the caller.
  • In Julia, whitespace is significant, unlike C/C++, so care must be taken when adding/removing whitespace from a Julia program.
  • In Julia, literal numbers without a decimal point (such as 42) create signed integers, of type Int, but literals too large to fit in the machine word size will automatically be promoted to a larger size type, such as Int64 (if Int is Int32), Int128, or the arbitrarily large BigInt type. There are no numeric literal suffixes, such as L, LL, U, UL, ULL to indicate unsigned and/or signed vs. unsigned. Decimal literals are always signed, and hexadecimal literals (which start with 0x like C/C++), are unsigned. Hexadecimal literals also, unlike C/C++/Java and unlike decimal literals in Julia, have a type based on the length of the literal, including leading 0s. For example, 0x0 and 0x00 have type UInt8, 0x000 and 0x0000 have type UInt16, then literals with 5 to 8 hex digits have type UInt32, 9 to 16 hex digits type UInt64 and 17 to 32 hex digits type UInt128. This needs to be taken into account when defining hexadecimal masks, for example ~0xf == 0xf0 is very different from ~0x000f == 0xfff0. 64 bit Float64 and 32 bit Float32 bit literals are expressed as 1.0 and 1.0f0 respectively. Floating point literals are rounded (and not promoted to the BigFloat type) if they can not be exactly represented. Floating point literals are closer in behavior to C/C++. Octal (prefixed with 0o) and binary (prefixed with 0b) literals are also treated as unsigned.
  • String literals can be delimited with either " or """, """ delimited literals can contain " characters without quoting it like "\"" String literals can have values of other variables or expressions interpolated into them, indicated by $variablename or $(expression), which evaluates the variable name or the expression in the context of the function.
  • // indicates a Rational number, and not a single-line comment (which is # in Julia)
  • #= indicates the start of a multiline comment, and =# ends it.
  • Functions in Julia return values from their last expression(s) or the return keyword. Multiple values can be returned from functions and assigned as tuples, e.g. (a, b) = myfunction() or a, b = myfunction(), instead of having to pass pointers to values as one would have to do in C/C++ (i.e. a = myfunction(&b).
  • Julia does not require the use of semicolons to end statements. The results of expressions are not automatically printed (except at the interactive prompt, i.e. the REPL), and lines of code do not need to end with semicolons. println or @printf can be used to print specific output. In the REPL, ; can be used to suppress output. ; also has a different meaning within [ ], something to watch out for. ; can be used to separate expressions on a single line, but are not strictly necessary in many cases, and are more an aid to readability.
  • In Julia, the operator (xor) performs the bitwise XOR operation, i.e. ^ in C/C++. Also, the bitwise operators do not have the same precedence as C/++, so parenthesis may be required.
  • Julia's ^ is exponentiation (pow), not bitwise XOR as in C/C++ (use , or xor, in Julia)
  • Julia has two right-shift operators, >> and >>>. >>> performs an arithmetic shift, >> always performs a logical shift, unlike C/C++, where the meaning of >> depends on the type of the value being shifted.
  • Julia's -> creates an anonymous function, it does not access a member via a pointer.
  • Julia does not require parentheses when writing if statements or for/while loops: use for i in [1, 2, 3] instead of for (int i=1; i <= 3; i++) and if i == 1 instead of if (i == 1).
  • Julia does not treat the numbers 0 and 1 as Booleans. You cannot write if (1) in Julia, because if statements accept only booleans. Instead, you can write if true, if Bool(1), or if 1==1.
  • Julia uses end to denote the end of conditional blocks, like if, loop blocks, like while/ for, and functions. In lieu of the one-line if ( cond ) statement, Julia allows statements of the form if cond; statement; end, cond && statement and !cond || statement. Assignment statements in the latter two syntaxes must be explicitly wrapped in parentheses, e.g. cond && (x = value), because of the operator precedence.
  • Julia has no line continuation syntax: if, at the end of a line, the input so far is a complete expression, it is considered done; otherwise the input continues. One way to force an expression to continue is to wrap it in parentheses.
  • Julia macros operate on parsed expressions, rather than the text of the program, which allows them to perform sophisticated transformations of Julia code. Macro names start with the @ character, and have both a function-like syntax, @mymacro(arg1, arg2, arg3), and a statement-like syntax, @mymacro arg1 arg2 arg3. The forms are interchangeable; the function-like form is particularly useful if the macro appears within another expression, and is often clearest. The statement-like form is often used to annotate blocks, as in the distributed for construct: @distributed for i in 1:n; #= body =#; end. Where the end of the macro construct may be unclear, use the function-like form.
  • Julia now has an enumeration type, expressed using the macro @enum(name, value1, value2, ...) For example: @enum(Fruit, banana=1, apple, pear)
  • By convention, functions that modify their arguments have a ! at the end of the name, for example push!.
  • In C++, by default, you have static dispatch, i.e. you need to annotate a function as virtual, in order to have dynamic dispatch. On the other hand, in Julia every method is "virtual" (although it's more general than that since methods are dispatched on every argument type, not only this, using the most-specific-declaration rule).
+

Noteworthy Differences from other Languages

Noteworthy Differences from other Languages

Noteworthy differences from MATLAB

Although MATLAB users may find Julia's syntax familiar, Julia is not a MATLAB clone. There are major syntactic and functional differences. The following are some noteworthy differences that may trip up Julia users accustomed to MATLAB:

  • Julia arrays are indexed with square brackets, A[i,j].
  • Julia arrays are not copied when assigned to another variable. After A = B, changing elements of B will modify A as well.
  • Julia values are not copied when passed to a function. If a function modifies an array, the changes will be visible in the caller.
  • Julia does not automatically grow arrays in an assignment statement. Whereas in MATLAB a(4) = 3.2 can create the array a = [0 0 0 3.2] and a(5) = 7 can grow it into a = [0 0 0 3.2 7], the corresponding Julia statement a[5] = 7 throws an error if the length of a is less than 5 or if this statement is the first use of the identifier a. Julia has push! and append!, which grow Vectors much more efficiently than MATLAB's a(end+1) = val.
  • The imaginary unit sqrt(-1) is represented in Julia as im, not i or j as in MATLAB.
  • In Julia, literal numbers without a decimal point (such as 42) create integers instead of floating point numbers. Arbitrarily large integer literals are supported. As a result, some operations such as 2^-1 will throw a domain error as the result is not an integer (see the FAQ entry on domain errors for details).
  • In Julia, multiple values are returned and assigned as tuples, e.g. (a, b) = (1, 2) or a, b = 1, 2. MATLAB's nargout, which is often used in MATLAB to do optional work based on the number of returned values, does not exist in Julia. Instead, users can use optional and keyword arguments to achieve similar capabilities.
  • Julia has true one-dimensional arrays. Column vectors are of size N, not Nx1. For example, rand(N) makes a 1-dimensional array.
  • In Julia, [x,y,z] will always construct a 3-element array containing x, y and z.
    • To concatenate in the first ("vertical") dimension use either vcat(x,y,z) or separate with semicolons ([x; y; z]).
    • To concatenate in the second ("horizontal") dimension use either hcat(x,y,z) or separate with spaces ([x y z]).
    • To construct block matrices (concatenating in the first two dimensions), use either hvcat or combine spaces and semicolons ([a b; c d]).
  • In Julia, a:b and a:b:c construct AbstractRange objects. To construct a full vector like in MATLAB, use collect(a:b). Generally, there is no need to call collect though. An AbstractRange object will act like a normal array in most cases but is more efficient because it lazily computes its values. This pattern of creating specialized objects instead of full arrays is used frequently, and is also seen in functions such as range, and with iterators such as enumerate and zip. The special objects can mostly be used as if they were normal arrays (ranges and concatenation are illustrated in the short sketch after this list).
  • Functions in Julia return values from their last expression or the return keyword instead of listing the names of variables to return in the function definition (see The return Keyword for details).
  • A Julia script may contain any number of functions, and all definitions will be externally visible when the file is loaded. Function definitions can be loaded from files outside the current working directory.
  • In Julia, reductions such as sum, prod, and max are performed over every element of an array when called with a single argument, as in sum(A), even if A has more than one dimension.
  • In Julia, parentheses must be used to call a function with zero arguments, like in rand().
  • Julia discourages the use of semicolons to end statements. The results of statements are not automatically printed (except at the interactive prompt), and lines of code do not need to end with semicolons. println or @printf can be used to print specific output.
  • In Julia, if A and B are arrays, logical comparison operations like A == B do not return an array of booleans. Instead, use A .== B, and similarly for the other comparison operators like <, >, <=, and >=.
  • In Julia, the operators &, |, and ⊻ (xor) perform the bitwise operations equivalent to and, or, and xor respectively in MATLAB, and have precedence similar to Python's bitwise operators (unlike C). They can operate on scalars or element-wise across arrays and can be used to combine logical arrays, but note the difference in order of operations: parentheses may be required (e.g., to select elements of A equal to 1 or 2 use (A .== 1) .| (A .== 2)).
  • In Julia, the elements of a collection can be passed as arguments to a function using the splat operator ..., as in xs=[1,2]; f(xs...).
  • Julia's svd returns singular values as a vector instead of as a dense diagonal matrix.
  • In Julia, ... is not used to continue lines of code. Instead, incomplete expressions automatically continue onto the next line.
  • In both Julia and MATLAB, the variable ans is set to the value of the last expression issued in an interactive session. In Julia, unlike MATLAB, ans is not set when Julia code is run in non-interactive mode.
  • Julia's structs do not support dynamically adding fields at runtime, unlike MATLAB's classes. Instead, use a Dict.
  • In Julia each module has its own global scope/namespace, whereas in MATLAB there is just one global scope.
  • In MATLAB, an idiomatic way to remove unwanted values is to use logical indexing, like in the expression x(x>3) or in the statement x(x>3) = [] to modify x in-place. In contrast, Julia provides the higher order functions filter and filter!, allowing users to write filter(z->z>3, x) and filter!(z->z>3, x) as alternatives to the corresponding transliterations x[x.>3] and x = x[x.>3]. Using filter! reduces the use of temporary arrays.
  • The analogue of extracting (or "dereferencing") all elements of a cell array, e.g. in vertcat(A{:}) in MATLAB, is written using the splat operator in Julia, e.g. as vcat(A...).
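
The short sketch below is not part of the original manual; it is a minimal, hedged illustration (with illustrative variable names) of a few of the points above about construction, concatenation, lazy ranges, and whole-array reductions:

x = [1, 2, 3]        # a true 1-dimensional Vector{Int}
M = [1 2; 3 4]       # 2x2 matrix: spaces concatenate horizontally, semicolons vertically
v = vcat(x, x)       # vertical concatenation, same as [x; x]
h = hcat(x, x)       # horizontal concatenation, same as [x x]; a 3x2 matrix
r = 1:5              # an AbstractRange object, computed lazily
collect(r)           # materializes the range as the vector [1, 2, 3, 4, 5]
sum(M)               # 10: called with a single argument, sum covers every element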

Noteworthy differences from R

One of Julia's goals is to provide an effective language for data analysis and statistical programming. For users coming to Julia from R, these are some noteworthy differences:

  • Julia's single quotes enclose characters, not strings.

  • Julia can create substrings by indexing into strings. In R, strings must be converted into character vectors before creating substrings.

  • In Julia, like Python but unlike R, strings can be created with triple quotes """ ... """. This syntax is convenient for constructing strings that contain line breaks.

  • In Julia, varargs are specified using the splat operator ..., which always follows the name of a specific variable, unlike R, for which ... can occur in isolation.

  • In Julia, modulus is mod(a, b), not a %% b. % in Julia is the remainder operator.

  • In Julia, not all data structures support logical indexing. Furthermore, logical indexing in Julia is supported only with vectors of length equal to the object being indexed. For example:

    • In R, c(1, 2, 3, 4)[c(TRUE, FALSE)] is equivalent to c(1, 3).
    • In R, c(1, 2, 3, 4)[c(TRUE, FALSE, TRUE, FALSE)] is equivalent to c(1, 3).
    • In Julia, [1, 2, 3, 4][[true, false]] throws a BoundsError.
    • In Julia, [1, 2, 3, 4][[true, false, true, false]] produces [1, 3].
  • Like many languages, Julia does not always allow operations on vectors of different lengths, unlike R where the vectors only need to share a common index range. For example, c(1, 2, 3, 4) + c(1, 2) is valid R but the equivalent [1, 2, 3, 4] + [1, 2] will throw an error in Julia.

  • Julia allows an optional trailing comma when that comma does not change the meaning of code. This can cause confusion among R users when indexing into arrays. For example, x[1,] in R would return the first row of a matrix; in Julia, however, the comma is ignored, so x[1,] == x[1], and will return the first element. To extract a row, be sure to use :, as in x[1,:].

  • Julia's map takes the function first, then its arguments, unlike lapply(<structure>, function, ...) in R. Similarly Julia's equivalent of apply(X, MARGIN, FUN, ...) in R is mapslices where the function is the first argument.

  • Multivariate apply in R, e.g. mapply(choose, 11:13, 1:3), can be written as broadcast(binomial, 11:13, 1:3) in Julia. Equivalently Julia offers a shorter dot syntax for vectorizing functions binomial.(11:13, 1:3).

  • Julia uses end to denote the end of conditional blocks, like if, loop blocks, like while/ for, and functions. In lieu of the one-line if ( cond ) statement, Julia allows statements of the form if cond; statement; end, cond && statement and !cond || statement. Assignment statements in the latter two syntaxes must be explicitly wrapped in parentheses, e.g. cond && (x = value).

  • In Julia, <-, <<- and -> are not assignment operators.

  • Julia's -> creates an anonymous function.

  • Julia constructs vectors using brackets. Julia's [1, 2, 3] is the equivalent of R's c(1, 2, 3).

  • Julia's * operator can perform matrix multiplication, unlike in R. If A and B are matrices, then A * B denotes a matrix multiplication in Julia, equivalent to R's A %*% B. In R, this same notation would perform an element-wise (Hadamard) product. To get the element-wise multiplication operation, you need to write A .* B in Julia.

  • Julia performs matrix transposition using the transpose function and conjugated transposition using the ' operator or the adjoint function. Julia's transpose(A) is therefore equivalent to R's t(A). Additionally a non-recursive transpose in Julia is provided by the permutedims function.

  • Julia does not require parentheses when writing if statements or for/while loops: use for i in [1, 2, 3] instead of for (i in c(1, 2, 3)) and if i == 1 instead of if (i == 1).

  • Julia does not treat the numbers 0 and 1 as Booleans. You cannot write if (1) in Julia, because if statements accept only booleans. Instead, you can write if true, if Bool(1), or if 1==1.

  • Julia does not provide nrow and ncol. Instead, use size(M, 1) for nrow(M) and size(M, 2) for ncol(M).

  • Julia is careful to distinguish scalars, vectors and matrices. In R, 1 and c(1) are the same. In Julia, they cannot be used interchangeably.

  • Julia's diag and diagm are not like R's.

  • Julia cannot assign to the results of function calls on the left hand side of an assignment operation: you cannot write diag(M) = fill(1, n).

  • Julia discourages populating the main namespace with functions. Most statistical functionality for Julia is found in packages under the JuliaStats organization. For example:

    • Functions related to probability distributions are provided by the Distributions package.
    • The DataFrames package provides data frames.
    • Generalized linear models are provided by the GLM package.

  • Julia provides tuples and real hash tables, but not R-style lists. When returning multiple items, you should typically use a tuple or a named tuple: instead of list(a = 1, b = 2), use (1, 2) or (a=1, b=2).

  • Julia encourages users to write their own types, which are easier to use than S3 or S4 objects in R. Julia's multiple dispatch system means that table(x::TypeA) and table(x::TypeB) act like R's table.TypeA(x) and table.TypeB(x).

  • In Julia, values are not copied when assigned or passed to a function. If a function modifies an array, the changes will be visible in the caller. This is very different from R and allows new functions to operate on large data structures much more efficiently.

  • In Julia, vectors and matrices are concatenated using hcat, vcat and hvcat, not c, rbind and cbind like in R.

  • In Julia, a range like a:b is not shorthand for a vector like in R, but is a specialized AbstractRange object that is used for iteration without high memory overhead. To convert a range into a vector, use collect(a:b).

  • Julia's max and min are the equivalent of pmax and pmin respectively in R, but both arguments need to have the same dimensions. While maximum and minimum replace max and min in R, there are important differences.

  • Julia's sum, prod, maximum, and minimum are different from their counterparts in R. They all accept one or two arguments. The first argument is an iterable collection such as an array. If there is a second argument, it indicates the dimensions over which the operation is carried out. For instance, let A = [1 2; 3 4] in Julia and B <- rbind(c(1,2),c(3,4)) be the same matrix in R. Then sum(A) gives the same result as sum(B), but sum(A, dims=1) is a row vector containing the sum over each column and sum(A, dims=2) is a column vector containing the sum over each row. This contrasts with the behavior of R, where separate colSums(B) and rowSums(B) functions provide these functionalities. If the dims keyword argument is a vector, then it specifies all the dimensions over which the sum is performed, while retaining the dimensions of the summed array, e.g. sum(A, dims=(1,2)) == hcat(10). Note that there is no error checking regarding the second argument. (These reductions are illustrated in the sketch after this list.)

  • Julia has several functions that can mutate their arguments. For example, it has both sort and sort!.

  • In R, performance requires vectorization. In Julia, almost the opposite is true: the best performing code is often achieved by using devectorized loops.

  • Julia is eagerly evaluated and does not support R-style lazy evaluation. For most users, this means that there are very few unquoted expressions or column names.

  • Julia does not support the NULL type. The closest equivalent is nothing, but it behaves like a scalar value rather than like a list. Use x == nothing instead of is.null(x).

  • In Julia, missing values are represented by the missing object rather than by NA. Use ismissing(x) instead of isna(x). The skipmissing function is generally used instead of na.rm=TRUE (though in some particular cases functions take a skipmissing argument).

  • Julia lacks the equivalent of R's assign or get.

  • In Julia, return does not require parentheses.

  • In R, an idiomatic way to remove unwanted values is to use logical indexing, like in the expression x[x>3] or in the statement x = x[x>3] to modify x in-place. In contrast, Julia provides the higher order functions filter and filter!, allowing users to write filter(z->z>3, x) and filter!(z->z>3, x) as alternatives to the corresponding transliterations x[x.>3] and x = x[x.>3]. Using filter! reduces the use of temporary arrays.
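
As a rough, hedged illustration of several of the points above (not part of the original text; names are illustrative), the following shows dims-based reductions, filter as an alternative to logical indexing, and skipmissing:

A = [1 2; 3 4]
sum(A)                 # 10: sums every element of the matrix
sum(A, dims=1)         # 1x2 matrix [4 6], the column sums (like colSums)
sum(A, dims=2)         # 2x1 matrix with the row sums (like rowSums)

x = [1, 4, 2, 5]
filter(z -> z > 3, x)  # [4, 5], without building a temporary mask
x[x .> 3]              # the transliteration using broadcast logical indexing

y = [1, missing, 3]
sum(skipmissing(y))    # 4: skipmissing plays the role of na.rm=TRUE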

Noteworthy differences from Python

  • Julia requires end to end a block. Unlike Python, Julia has no pass keyword.
  • In Julia, indexing of arrays, strings, etc. is 1-based not 0-based.
  • Julia's slice indexing includes the last element, unlike in Python. a[2:3] in Julia is a[1:3] in Python.
  • Julia does not support negative indices. In particular, the last element of a list or array is indexed with end in Julia, not -1 as in Python.
  • Julia's for, if, while, etc. blocks are terminated by the end keyword. Indentation level is not significant as it is in Python.
  • Julia has no line continuation syntax: if, at the end of a line, the input so far is a complete expression, it is considered done; otherwise the input continues. One way to force an expression to continue is to wrap it in parentheses.
  • Julia arrays are column major (Fortran ordered) whereas NumPy arrays are row major (C-ordered) by default. To get optimal performance when looping over arrays, the order of the loops should be reversed in Julia relative to NumPy (see relevant section of Performance Tips).
  • Julia's updating operators (e.g. +=, -=, ...) are not in-place whereas NumPy's are. This means A = [1, 1]; B = A; B += [3, 3] doesn't change values in A; it instead rebinds the name B to the result of the right-hand side B = B + [3, 3], which is a new array. For in-place operation, use B .+= 3 (see also dot operators), explicit loops, or InplaceOps.jl. The sketch after this list illustrates the difference.
  • Julia evaluates default values of function arguments every time the method is invoked, unlike in Python where the default values are evaluated only once when the function is defined. For example, the function f(x=rand()) = x returns a new random number every time it is invoked without argument. On the other hand, the function g(x=[1,2]) = push!(x,3) returns [1,2,3] every time it is called as g().
  • In Julia % is the remainder operator, whereas in Python it is the modulus.
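
The following minimal sketch is not from the original text (names are illustrative); it contrasts 1-based inclusive indexing, rebinding += versus in-place .+=, and per-call evaluation of default arguments:

a = [10, 20, 30, 40]
a[1]            # 10: indexing is 1-based
a[2:3]          # [20, 30]: the upper bound of a slice is included
a[end]          # 40: end, not -1, refers to the last element

A = [1, 1]
B = A
B += [3, 3]     # rebinds B to a new array [4, 4]; A is still [1, 1]
B = A
B .+= 3         # in-place broadcast: the shared array, and hence A, is now [4, 4]

f(x = rand()) = x   # the default value is re-evaluated on every call to f()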

Noteworthy differences from C/C++

  • Julia arrays are indexed with square brackets, and can have more than one dimension A[i,j]. This syntax is not just syntactic sugar for a reference to a pointer or address as in C/C++. See the Julia documentation for the syntax for array construction (it has changed between versions).
  • In Julia, indexing of arrays, strings, etc. is 1-based not 0-based.
  • Julia arrays are not copied when assigned to another variable. After A = B, changing elements of B will modify A as well. Updating operators like += do not operate in-place, they are equivalent to A = A + B which rebinds the left-hand side to the result of the right-hand side expression.
  • Julia arrays are column major (Fortran ordered) whereas C/C++ arrays are row major ordered by default. To get optimal performance when looping over arrays, the order of the loops should be reversed in Julia relative to C/C++ (see relevant section of Performance Tips).
  • Julia values are not copied when assigned or passed to a function. If a function modifies an array, the changes will be visible in the caller.
  • In Julia, whitespace is significant, unlike C/C++, so care must be taken when adding/removing whitespace from a Julia program.
  • In Julia, literal numbers without a decimal point (such as 42) create signed integers, of type Int, but literals too large to fit in the machine word size will automatically be promoted to a larger size type, such as Int64 (if Int is Int32), Int128, or the arbitrarily large BigInt type. There are no numeric literal suffixes, such as L, LL, U, UL, ULL to indicate unsigned and/or signed vs. unsigned. Decimal literals are always signed, and hexadecimal literals (which start with 0x like C/C++) are unsigned. Hexadecimal literals also, unlike C/C++/Java and unlike decimal literals in Julia, have a type based on the length of the literal, including leading 0s. For example, 0x0 and 0x00 have type UInt8, 0x000 and 0x0000 have type UInt16, then literals with 5 to 8 hex digits have type UInt32, 9 to 16 hex digits type UInt64 and 17 to 32 hex digits type UInt128. This needs to be taken into account when defining hexadecimal masks, for example ~0xf == 0xf0 is very different from ~0x000f == 0xfff0. 64-bit Float64 and 32-bit Float32 literals are expressed as 1.0 and 1.0f0 respectively. Floating point literals are rounded (and not promoted to the BigFloat type) if they cannot be exactly represented. Floating point literals are closer in behavior to C/C++. Octal (prefixed with 0o) and binary (prefixed with 0b) literals are also treated as unsigned.
  • String literals can be delimited with either " or """; """-delimited literals can contain " characters without escaping them as "\"". String literals can have values of other variables or expressions interpolated into them, indicated by $variablename or $(expression), which evaluates the variable name or the expression in the context of the function.
  • // indicates a Rational number, and not a single-line comment (which is # in Julia)
  • #= indicates the start of a multiline comment, and =# ends it.
  • Functions in Julia return values from their last expression(s) or the return keyword. Multiple values can be returned from functions and assigned as tuples, e.g. (a, b) = myfunction() or a, b = myfunction(), instead of having to pass pointers to values as one would have to do in C/C++ (i.e. a = myfunction(&b)).
  • Julia does not require the use of semicolons to end statements. The results of expressions are not automatically printed (except at the interactive prompt, i.e. the REPL), and lines of code do not need to end with semicolons. println or @printf can be used to print specific output. In the REPL, ; can be used to suppress output. ; also has a different meaning within [ ], something to watch out for. Semicolons can be used to separate expressions on a single line, but are not strictly necessary in many cases, and are more an aid to readability.
  • In Julia, the operator ⊻ (xor) performs the bitwise XOR operation, i.e. ^ in C/C++. Also, the bitwise operators do not have the same precedence as in C/C++, so parentheses may be required.
  • Julia's ^ is exponentiation (pow), not bitwise XOR as in C/C++ (use ⊻, or xor, in Julia)
  • Julia has two right-shift operators, >> and >>>. >>> always performs a logical (zero-fill) shift, while >> performs an arithmetic (sign-preserving) shift on signed values; this differs from C/C++, where there is only >> and its meaning depends on the type of the value being shifted. (See the sketch after this list.)
  • Julia's -> creates an anonymous function, it does not access a member via a pointer.
  • Julia does not require parentheses when writing if statements or for/while loops: use for i in [1, 2, 3] instead of for (int i=1; i <= 3; i++) and if i == 1 instead of if (i == 1).
  • Julia does not treat the numbers 0 and 1 as Booleans. You cannot write if (1) in Julia, because if statements accept only booleans. Instead, you can write if true, if Bool(1), or if 1==1.
  • Julia uses end to denote the end of conditional blocks, like if, loop blocks, like while/ for, and functions. In lieu of the one-line if ( cond ) statement, Julia allows statements of the form if cond; statement; end, cond && statement and !cond || statement. Assignment statements in the latter two syntaxes must be explicitly wrapped in parentheses, e.g. cond && (x = value), because of the operator precedence.
  • Julia has no line continuation syntax: if, at the end of a line, the input so far is a complete expression, it is considered done; otherwise the input continues. One way to force an expression to continue is to wrap it in parentheses.
  • Julia macros operate on parsed expressions, rather than the text of the program, which allows them to perform sophisticated transformations of Julia code. Macro names start with the @ character, and have both a function-like syntax, @mymacro(arg1, arg2, arg3), and a statement-like syntax, @mymacro arg1 arg2 arg3. The forms are interchangeable; the function-like form is particularly useful if the macro appears within another expression, and is often clearest. The statement-like form is often used to annotate blocks, as in the distributed for construct: @distributed for i in 1:n; #= body =#; end. Where the end of the macro construct may be unclear, use the function-like form.
  • Julia now has an enumeration type, expressed using the macro @enum(name, value1, value2, ...). For example: @enum(Fruit, banana=1, apple, pear)
  • By convention, functions that modify their arguments have a ! at the end of the name, for example push!.
  • In C++, by default, you have static dispatch, i.e. you need to annotate a function as virtual, in order to have dynamic dispatch. On the other hand, in Julia every method is "virtual" (although it's more general than that since methods are dispatched on every argument type, not only this, using the most-specific-declaration rule).
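
To make a few of these points concrete, here is a brief, hedged sketch (illustrative values only, not part of the original text) of hexadecimal literal widths, exponentiation versus xor, the two right-shift operators, string interpolation, and rational literals:

typeof(0xff)          # UInt8: a hexadecimal literal's type follows its digit count
typeof(0x00ff)        # UInt16
~0x000f               # 0xfff0: mask width depends on how the literal was written
2^10                  # 1024: ^ is exponentiation, not XOR
xor(0b1100, 0b1010)   # 0x06: bitwise XOR, also written 0b1100 ⊻ 0b1010
-8 >> 1               # -4: arithmetic shift preserves the sign
-8 >>> 1              # a large positive value: logical shift fills with zeros
name = "world"
println("hello, $name")   # string interpolation with $
1 // 3                    # a Rational number, not a comment
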
diff --git a/en/stable/manual/parallel-computing/index.html b/en/stable/manual/parallel-computing/index.html index 6d5abf000ece5..809a94563643b 100644 --- a/en/stable/manual/parallel-computing/index.html +++ b/en/stable/manual/parallel-computing/index.html @@ -6,7 +6,7 @@ ga('create', 'UA-28835595-6', 'auto'); ga('send', 'pageview'); -

Parallel Computing

Parallel Computing

For newcomers to multi-threading and parallel computing, it can be useful to first appreciate the different levels of parallelism offered by Julia. We can divide them into three main categories:

  1. Julia Coroutines (Green Threading)
  2. Multi-Threading
  3. Multi-Core or Distributed Processing

We will first consider Julia Tasks (aka Coroutines) and other modules that rely on the Julia runtime library, which allow computations to be suspended and resumed with full control over inter-task communication without having to manually interface with the operating system's scheduler. Julia also allows Tasks to communicate through operations like wait and fetch. Communication and data synchronization are managed through Channels, which are the conduits that enable inter-task communication.

Julia also supports experimental multi-threading, where execution is forked and an anonymous function is run across all threads. Described as a fork-join approach, parallel threads are branched off and they all have to join the Julia main thread before serial execution continues. Multi-threading is supported via the Base.Threads module, which is still considered experimental, as Julia is not fully thread-safe yet. In particular, segfaults seem to occur during I/O operations and task switching. For an up-to-date reference, keep an eye on the issue tracker. Multi-threading should only be used once you take global variables, locks and atomics into consideration, all of which are explained later.
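
As a minimal, hedged sketch of the fork-join pattern described above (assuming Julia was started with several threads, e.g. JULIA_NUM_THREADS=4; the variable names are illustrative):

using Base.Threads

acc = Atomic{Int}(0)       # an atomic counter shared by all threads
@threads for i in 1:1000
    atomic_add!(acc, i)    # a plain shared variable here would be a data race
end
println(acc[])             # 500500, the sum 1 + 2 + ... + 1000
println(nthreads())        # how many threads the loop was split across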

Finally, we will present Julia's approach to distributed and parallel computing. With scientific computing in mind, Julia natively implements interfaces to distribute a process across multiple cores or machines. We will also mention useful external packages for distributed programming such as MPI.jl and DistributedArrays.jl.
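
For orientation only, the following hedged sketch (not part of the original text; f is an illustrative function) shows the basic Distributed workflow of adding workers, defining a function everywhere, and distributing work with pmap and @distributed:

using Distributed

addprocs(2)                          # start two local worker processes
@everywhere f(x) = x^2               # make f available on every worker
println(pmap(f, 1:10))               # the calls are distributed across the workers
total = @distributed (+) for i in 1:100
    i                                # parallel reduction combined with +
end
println(total)                       # 5050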

Coroutines

Julia's parallel programming platform uses Tasks (aka Coroutines) to switch among multiple computations. To express an order of execution between lightweight threads, communication primitives are necessary. Julia offers Channel(func::Function, ctype=Any, csize=0, taskref=nothing), which creates a new task from func, binds it to a new channel of type ctype and size csize, and schedules the task. Channels can serve as a way to communicate between tasks, as Channel{T}(sz::Int) creates a buffered channel of type T and size sz. Whenever code performs a communication operation like fetch or wait, the current task is suspended and a scheduler picks another task to run. A task is restarted when the event it is waiting for completes.
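
A minimal, hedged sketch of task communication through a buffered channel, using @async, put!, channel iteration, and close (the names jobs and producer are illustrative):

jobs = Channel{Int}(4)      # a buffered channel of Ints with capacity 4

producer = @async begin     # a task that runs concurrently with the loop below
    for i in 1:8
        put!(jobs, i^2)     # suspends whenever the buffer is full
    end
    close(jobs)             # tells consumers that no more values will arrive
end

for value in jobs           # iterating a channel takes values until it is closed
    println(value)
end
wait(producer)              # make sure the producer task has finished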

For many problems, it is not necessary to think about tasks directly. However, they can be used to wait for multiple events at the same time, which provides for dynamic scheduling. In dynamic scheduling, a program decides what to compute or where to compute it based on when other jobs finish. This is needed for unpredictable or unbalanced workloads, where we want to assign more work to processes only when they finish their current tasks.

Channels

The section on Tasks in Control Flow discussed the execution of multiple functions in a co-operative manner. Channels can be quite useful to pass data between running tasks, particularly those involving I/O operations.

Examples of operations involving I/O include reading/writing to files, accessing web services, executing external programs, etc. In all these cases, overall execution time can be improved if other tasks can be run while a file is being read, or while waiting for an external service/program to complete.
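
As a hedged illustration (sleep stands in for a real I/O operation and timings are approximate), two waiting tasks overlap instead of running back to back:

@time begin
    t1 = @async (sleep(2); "first response")    # sleep stands in for slow I/O
    t2 = @async (sleep(2); "second response")
    println(fetch(t1), " / ", fetch(t2))
end
# prints roughly 2 seconds of elapsed time, not 4: the two waits overlap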

A channel can be visualized as a pipe, i.e., it has a write end and a read end: