Releases: Bareflank/hypervisor
v3.0.0
I am proud to announce the release of Bareflank v3.0.0. It has been a long journey to get to this point, but we are finally here. v3.0.0 includes the following changes:
Microkernel
Previous versions of Bareflank used a monolithic approach: to extend the hypervisor and create your own custom hypervisor, you had to register callbacks through Bareflank-specific APIs, much like writing a device driver for the Linux kernel. The problem with this approach was that, as with the Linux kernel, the APIs change, and you had to code in the same language and build system the project required. As AIS made changes to the APIs to support its own internal needs, this dramatically affected the rest of the community, often breaking downstream projects. In addition, most hypervisor developers are unfamiliar with C++, which previous versions required (at least to some degree).
The microkernel design breaks the hypervisor up into a ring 0 kernel component and a ring 3 extension, both of which execute in VMX root (while everything else, including the host OS, runs in VMX non-root). Bareflank provides the kernel, while the user implements the actual VMM in the ring 3, userspace component we call an extension. Bareflank also provides the Microkernel ABI Specification, which documents a fully versioned syscall ABI that extensions use to communicate with the kernel and perform privileged operations. Not only does this provide better security for product-focused use cases, it also provides a well-defined division between the upstream logic and your downstream logic, with a simple and, most importantly, stable ABI. The Bareflank project is committed to keeping changes to the ABI backwards compatible, similar to how Windows and Linux modify their syscall interfaces. If the project's kernel-side logic changes, downstream users can safely pair their existing extension logic with the updated kernel without breaking changes. In addition, extension authors can now safely write their extensions in any language and use any build system they prefer. We even provide a default example written in Rust instead of C++. That's right: Bareflank supports the development of a VMM in Rust. You can use whatever build system and whatever language you want. If you stick to CMake, you can integrate with Bareflank's existing build system for a seamless experience, but it is not required. Simply tell vmmctl which kernel and which extension you want to use, no matter how they were built, and that's it.
Native Windows Support
The previous version of Bareflank used libc++ and newlib with a custom unwinder. Although this provided support for a number of C++ features, it required the use of Cygwin to compile on Windows due to newlib's Linux-specific build system. Bareflank now uses the BSL, which provides an exception-free, dynamic-memory-free, critical-systems-compliant implementation of a small subset of the C++ library (and also supports Rust). This removed every external dependency the project had, meaning all of Bareflank can be compiled natively on Windows. The only Bareflank-specific feature not supported on Windows is generating code coverage reports (as this still uses LCOV). Everything else uses LLVM, Ninja, and the bash that comes with Git for Windows. This dramatically simplifies development on Windows and greatly improves the overall developer experience.
AMD support (and soon ARMv8)
Bareflank now comes with support for AMD. In fact, all of Bareflank v3.0 was developed on AMD first and then ported to Intel, ensuring that AMD development was taken seriously. The loader already has support for ARMv8, and full ARMv8 support is coming soon, so stay tuned. Previous versions of Bareflank had too many dependencies that made ARM support nearly impossible, including newlib and our custom unwinder. With the BSL, ARM support is simply a matter of finding the time to complete the feature. If you are interested in helping to complete ARM support faster, please let us know.
Critical Systems Compliance
Bareflank is now compliant with AUTOSAR and MISRA, including full unit and integration testing with MC/DC coverage. In fact, Bareflank is developed without the use of any Boolean operators and with a strict if/else policy enforced by Clang Tidy and our CI, ensuring that simple line coverage also satisfies all MC/DC paths. Building on this, AIS is developing MicroV, a critical-systems-compliant, Type 1, cross-platform, edge-computing-focused hypervisor that implements KVM's API, but with strong isolation and a small TCB capable of supporting your government and critical-systems needs.
pre-v3.0
v2.1 (archive)
This release is intended to archive the Bareflank project prior to the release of the new microkernel approach. For more information on the upcoming changes, please see the following:
Bareflank Hypervisor (2.0)
The Bareflank Team is proud to announce version v2.0.0, with the following new features:
New Build System
Round 3 of our build system is now based on CMake. With the new build system, we no longer have a dependency on bash to compile our code (our dependencies, like binutils and newlib, still have this dependency). Build times are significantly improved, the source is far easier to read and modify, and we now have the ability to easily support additional architectures like ARM, which is currently being developed. Fingers crossed, this should be the last time we completely rewrite the build system. For information on how to use the build system, please see the instructions in the main README.md and our example config.
Reorganization
Most of the code has been reorganized and greatly simplified, both to make the source easier to follow and to provide better support for projects like the hyperkernel without as much code duplication. To see the source code for the actual hypervisor, please see https://github.com/Bareflank/hypervisor/tree/master/bfvmm. The remaining source code provides support logic such as the C runtime, the unwinder (for exception support), intrinsics for Intel and ARM, the Bareflank Manager for starting / stopping the hypervisor from userspace (optional), the ELF loader, and an SDK of various headers that simplify development.
Delegates
We have moved many of the APIs from inheritance to delegates. Inheritance is still used in some places, but switching to delegates has increased performance, reduced memory usage, and greatly simplified our APIs. In addition, the Extended APIs now provide a more comprehensive set of APIs that are easier to use as a result.
UEFI Support
UEFI support is being added and will be completed for v2.0. UEFI support will provide better Type 1 support, allowing users to start Bareflank from UEFI and then boot their desired operating system, including Windows and Linux.
Memory Management
Better memory management will be completed for v2.0. The new memory manager will be modeled after the SLAB / buddy allocators in Linux, reducing external fragmentation and increasing performance. It will also provide the ability to dynamically add memory to the hypervisor from bfdriver, allowing us to reduce the size of the initial hypervisor and scale better as the total number of CPUs increases.
Bareflank Hypervisor (rc2.0.4)
The Bareflank Team is proud to announce version rc2.0.4, with the following new features:
NOTE: This is a pre-release, and help is needed with testing, code review, documentation review, etc. Please give this release a try and tell us what you think here, or send in a PR.
New Build System
Round 3 of our build system is now based on CMake. With the new build system, we no longer have a dependency on bash to compile our code (our dependencies, like binutils and newlib, still have this dependency). Build times are significantly improved, the source is far easier to read and modify, and we now have the ability to easily support additional architectures like ARM, which is currently being developed. Fingers crossed, this should be the last time we completely rewrite the build system. For information on how to use the build system, please see the instructions in the main README.md and our example config.
Reorganization
Most of the code has been reorganized and greatly simplified, both to make the source easier to follow and to provide better support for projects like the hyperkernel without as much code duplication. To see the source code for the actual hypervisor, please see https://github.com/Bareflank/hypervisor/tree/master/bfvmm. The remaining source code provides support logic such as the C runtime, the unwinder (for exception support), intrinsics for Intel and ARM, the Bareflank Manager for starting / stopping the hypervisor from userspace (optional), the ELF loader, and an SDK of various headers that simplify development.
Delegates
We have moved many of the APIs from inheritance to delegates. Inheritance is still used in some places, but switching to delegates has increased performance, reduced memory usage, and greatly simplified our APIs. In addition, the Extended APIs now provide a more comprehensive set of APIs that are easier to use as a result.
UEFI Support
UEFI support is being added and will be completed for v2.0. UEFI support will provide better Type 1 support, allowing users to start Bareflank from UEFI and then boot their desired operating system, including Windows and Linux.
Memory Management
Better memory management will be completed for v2.0. The new memory manager will be modeled after the SLAB / buddy allocators in Linux, reducing external fragmentation and increasing performance. It will also provide the ability to dynamically add memory to the hypervisor from bfdriver, allowing us to reduce the size of the initial hypervisor and scale better as the total number of CPUs increases.
Bareflank Hypervisor (rc2.0.3)
The Bareflank Team is proud to announce version rc2.0.3, with the following new features:
NOTE: This is a pre-release, and help is needed with testing, code review, documentation review, etc. Please give this release a try and tell us what you think here, or send in a PR.
New Build System
Round 3 of our build system is now based on CMake. With the new build system, we no longer have a dependency on bash to compile our code (our dependencies, like binutils and newlib, still have this dependency). Build times are significantly improved, the source is far easier to read and modify, and we now have the ability to easily support additional architectures like ARM, which is currently being developed. Fingers crossed, this should be the last time we completely rewrite the build system. For information on how to use the build system, please see the instructions in the main README.md and our example config.
Reorganization
Most of the code has been reorganized and greatly simplified, both to make the source easier to follow and to provide better support for projects like the hyperkernel without as much code duplication. To see the source code for the actual hypervisor, please see https://github.com/Bareflank/hypervisor/tree/master/bfvmm. The remaining source code provides support logic such as the C runtime, the unwinder (for exception support), intrinsics for Intel and ARM, the Bareflank Manager for starting / stopping the hypervisor from userspace (optional), the ELF loader, and an SDK of various headers that simplify development.
Delegates
We have moved many of the APIs from inheritance to delegates. Inheritance is still used in some places, but switching to delegates has increased performance, reduced memory usage, and greatly simplified our APIs. In addition, the Extended APIs now provide a more comprehensive set of APIs that are easier to use as a result.
UEFI Support
UEFI support is being added and will be completed for v2.0. UEFI support will provide better Type 1 support, allowing users to start Bareflank from UEFI and then boot their desired operating system, including Windows and Linux.
Memory Management
Better memory management will be completed for v2.0. The new memory manager will be modeled after the SLAB / buddy allocators in Linux, reducing external fragmentation and increasing performance. It will also provide the ability to dynamically add memory to the hypervisor from bfdriver, allowing us to reduce the size of the initial hypervisor and scale better as the total number of CPUs increases.
Bareflank Hypervisor (rc2.0.2)
The Bareflank Team is proud to announce version rc2.0.2, with the following new features:
NOTE: This is a pre-release, and help is needed with testing, code review, documentation review, etc. Please give this release a try and tell us what you think here, or send in a PR.
New Build System
Round 3 of our build system is now based on CMake. With the new build system, we no longer have a dependency on bash to compile our code (our dependencies, like binutils and newlib, still have this dependency). Build times are significantly improved, the source is far easier to read and modify, and we now have the ability to easily support additional architectures like ARM, which is currently being developed. Fingers crossed, this should be the last time we completely rewrite the build system. For information on how to use the build system, please see the instructions in the main README.md and our example config.
Reorganization
Most of the code has been reorganized and greatly simplified, both to make the source easier to follow and to provide better support for projects like the hyperkernel without as much code duplication. To see the source code for the actual hypervisor, please see https://github.com/Bareflank/hypervisor/tree/master/bfvmm. The remaining source code provides support logic such as the C runtime, the unwinder (for exception support), intrinsics for Intel and ARM, the Bareflank Manager for starting / stopping the hypervisor from userspace (optional), the ELF loader, and an SDK of various headers that simplify development.
Delegates
We have moved many of the APIs from inheritance to delegates. Inheritance is still used in some places, but switching to delegates has increased performance, reduced memory usage, and greatly simplified our APIs. In addition, the Extended APIs now provide a more comprehensive set of APIs that are easier to use as a result.
UEFI Support
UEFI support is being added and will be completed for v2.0. UEFI support will provide better Type 1 support, allowing users to start Bareflank from UEFI and then boot their desired operating system, including Windows and Linux.
Memory Management
Better memory management will be completed for v2.0. The new memory manager will be modeled after the SLAB / buddy allocators in Linux, reducing external fragmentation and increasing performance. It will also provide the ability to dynamically add memory to the hypervisor from bfdriver, allowing us to reduce the size of the initial hypervisor and scale better as the total number of CPUs increases.
Bareflank Hypervisor (rc2.0.1)
The Bareflank Team is proud to announce version rc2.0.1. This release is a minor tag in preparation for rc2.0.2.
Bareflank Hypervisor (v1.1.0)
The Bareflank Team is proud to announce version v1.1.0, with the following new features:
New Build System
A new build system was developed that supports out-of-tree compilation, better integration with extensions, and Docker. With Docker support, you no longer need to compile the cross compilers on Linux-based systems. This not only provides a faster way to try out Bareflank, but also speeds up our Travis CI builds, reducing testing time. Local compilers are still recommended if you plan to do heavy development, as they are faster.
Windows / OpenSUSE Support
Bareflank now supports Windows 8.1, Windows 10, and openSUSE Leap 42.2. Local compilers are required for Windows, and serial output does not work with Windows in VMware (the nested case), but works fine on real hardware. Extensive testing has been done with Windows, including running benchmark programs and CPU-Z while Bareflank is running.
VMM Isolation
Like MoRE and SimpleVisor, Bareflank version 1.0 used the host OS's resources for execution, including its page tables, CR0, CR4, GDT, IDT, etc. Bareflank now has its own set of resources, providing isolation from the host OS like most traditional hypervisors. This provides the ability to map host / guest memory, as well as better security.
MultiCore Support
All of the cores are now used by Bareflank instead of just the bootstrap core (as was the case with version 1.0). To support multicore, mutex support was added to the hypervisor. Bareflank does not contain a scheduler, and thus thread support is not provided, but std::mutex is supported via a simple spinlock to ensure coherency between cores. There are also a number of APIs for working with each core individually if needed.
VMCall Support
Bareflank now has generic support for VMCalls, including version querying, raw register access, mapped memory, JSON commands, simple events, and VMM unit testing. The Bareflank Manager (BFM) userspace application has also been extended to provide command-line access to these VMCalls, and the host OS drivers have been updated to provide IOCTL support if direct VMCalls are not desired.
Clang / LLVM Support
Bareflank can now cross compile the VMM using Clang / LLVM. In addition, all of the libraries that are used, including newlib and libc++, are compiled and linked as shared libraries.
Optimization Support
Bareflank now has support for SSE / AVX and "-O3" optimizations in the VMM.
Testing / GSL Support
Bareflank now supports a number of testing tools to ensure the source code works as advertised, including Coveralls for code coverage, static analysis via Clang Tidy and Coverity, and dynamic analysis via Google Sanitizers. These tests are executed on each PR via Travis CI and AppVeyor to ensure the repo remains stable. Finally, Bareflank uses Clang Tidy to ensure C++ Core Guidelines compliance and has support for the Guideline Support Library.
Bareflank Hypervisor (rc1.1.0)
The Bareflank Team is proud to announce the release candidate version rc1.1.0, with the following new features:
NOTE: Help wanted for testing, code review, etc.
New Build System
A new build system was developed that supports out-of-tree compilation, better integration with extensions, and Docker. With Docker support, you no longer need to compile the cross compilers on Linux-based systems. This not only provides a faster way to try out Bareflank, but also speeds up our Travis CI builds, reducing testing time. Local compilers are still recommended if you plan to do heavy development, as they are faster.
Windows / OpenSUSE Support
Bareflank now supports Windows 8.1, Windows 10, and openSUSE Leap 42.2. Local compilers are required for Windows, and serial output does not work with Windows in VMware (the nested case), but works fine on real hardware. Extensive testing has been done with Windows, including running benchmark programs and CPU-Z while Bareflank is running.
VMM Isolation
Like MoRE and SimpleVisor, Bareflank version 1.0 used the host OS's resources for execution, including its page tables, CR0, CR4, GDT, IDT, etc. Bareflank now has its own set of resources, providing isolation from the host OS like most traditional hypervisors. This provides the ability to map host / guest memory, as well as better security.
MultiCore Support
All of the cores are now used by Bareflank instead of just the bootstrap core (as was the case with version 1.0). To support multicore, mutex support was added to the hypervisor. Bareflank does not contain a scheduler, and thus thread support is not provided, but std::mutex is supported via a simple spinlock to ensure coherency between cores. There are also a number of APIs for working with each core individually if needed.
VMCall Support
Bareflank now has generic support for VMCalls, including version querying, raw register access, mapped memory, JSON commands, simple events, and VMM unit testing. The Bareflank Manager (BFM) userspace application has also been extended to provide command-line access to these VMCalls, and the host OS drivers have been updated to provide IOCTL support if direct VMCalls are not desired.
Clang / LLVM Support
Bareflank can now cross compile the VMM using Clang / LLVM. In addition, all of the libraries that are used, including newlib and libc++, are compiled and linked as shared libraries.
Optimization Support
Bareflank now has support for SSE / AVX and "-O3" optimizations in the VMM.
Testing / GSL Support
Bareflank now supports a number of testing tools to ensure the source code works as advertised, including Coveralls for code coverage, static analysis via Clang Tidy and Coverity, and dynamic analysis via Google Sanitizers. These tests are executed on each PR via Travis CI and AppVeyor to ensure the repo remains stable. Finally, Bareflank uses Clang Tidy to ensure C++ Core Guidelines compliance and has support for the Guideline Support Library.