Source: https://joyeecheung.github.io/blog/2021/08/27/binding-v8-on-an-m1-macbook/

Building V8 on an M1 MacBook

I recently got an M1 MacBook and have been playing around with it a bit. Many open source projects still haven’t added macOS on ARM64 to their support matrix, so a few extra steps are needed to get them working properly, and V8 is no exception. Here are the steps I took to get V8 building on an M1 MacBook; hopefully they’ll help someone else on the Internet.

Setting up the build environment

First, download depot_tools and bootstrap it as usual. Assuming you place all your projects under $WORKSPACE_DIR (which is what I tend to do):

cd $WORKSPACE_DIR
git clone https://chromium.googlesource.com/chromium/tools/depot_tools.git

# To make the depot_tool commands e.g. fetch available
export PATH=$WORKSPACE_DIR/depot_tools:$PATH
# Optionally, add this to your ~/.zshrc if you are using zsh, or any
# other equivalents
echo "export PATH=$WORKSPACE_DIR/depot_tools:\$PATH" >> ~/.zshrc

# Bootstrap depot_tools
gclient

Next, fetch V8:

# Create the parent folder for V8
mkdir $WORKSPACE_DIR/v8

# The following are necessary to make `gclient sync` work on macOS ARM64,
# otherwise you'd see vpython errors
cd $WORKSPACE_DIR/v8
echo "mac-arm64" > .cipd_client_platform
export VPYTHON_BYPASS="manually managed python not supported by chrome operations"
# Optionally, add this to your ~/.zshrc if you are using zsh, or any
# other equivalents
echo "export VPYTHON_BYPASS=\"manually managed python not supported by chrome operations\"" >> ~/.zshrc

fetch v8

Creating V8 build configs

tools/dev/v8gen.py doesn’t seem to work on macOS ARM64 yet - it keeps putting target_cpu = "x64" into the config, which conflicts with the v8_target_cpu = "arm64" it generates when you pass arm64 as the architecture. So I ended up creating the configs manually myself. For debug builds:

mkdir -p out.gn/arm64.debug/
cat >> out.gn/arm64.debug/args.gn <<EOF

is_debug = true
target_cpu = "arm64"
v8_enable_backtrace = true
v8_enable_slow_dchecks = true
v8_optimized_debug = false
v8_target_cpu = "arm64"

v8_enable_trace_ignition=true
cc_wrapper="ccache"
EOF

For release builds:

mkdir -p out.gn/arm64.release/
cat >> out.gn/arm64.release/args.gn <<EOF

dcheck_always_on = false
is_debug = false
target_cpu = "arm64"
v8_target_cpu = "arm64"

cc_wrapper="ccache"
EOF

# Generate the build files for each config
gn gen out.gn/arm64.debug
gn gen out.gn/arm64.release

v8_enable_trace_ignition=true (which gives you a nice trace when you pass --trace-ignition to d8) and cc_wrapper="ccache" (which enables ccache integration; see Chromium’s guide on how to use ccache) are what I tend to use myself, but neither is required. For optimized debug builds you just need to turn on v8_optimized_debug and tweak the other configs as you see fit.
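For example, an optimized debug config could look like the sketch below - the out.gn/arm64.optdebug directory name is just a convention of my own, and the flags simply mirror the debug config above with v8_optimized_debug turned on and the slow DChecks left out:

# A minimal optimized-debug config; adjust the flags to taste
mkdir -p out.gn/arm64.optdebug/
cat > out.gn/arm64.optdebug/args.gn <<EOF

is_debug = true
target_cpu = "arm64"
v8_enable_backtrace = true
v8_optimized_debug = true
v8_target_cpu = "arm64"

cc_wrapper="ccache"
EOF

gn gen out.gn/arm64.optdebug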

Building V8

As explained earlier, I usually use ccache when building V8, so I’d first do

export CCACHE_CPP2=yes
export CCACHE_SLOPPINESS=time_macros

# Optionally, add this to your ~/.zshrc if you are using zsh, or any
# other equivalents
echo "export CCACHE_CPP2=yes" >> ~/.zshrc
echo "export CCACHE_SLOPPINESS=time_macros" >> ~/.zshrc

And then, per the instructions in the Chromium ccache guide, prepend the bundled clang to $PATH before running ninja to build:

PATH=`pwd`/third_party/llvm-build/Release+Asserts/bin:$PATH ninja -C out.gn/arm64.release
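To confirm that ccache is actually being used, checking its statistics before and after a build is a quick sanity check (this is plain ccache usage, nothing V8-specific):

# Cache hits/misses should increase as the build progresses
ccache -s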

Just to check that it’s working:

python2 tools/run-tests.py --outdir=out.gn/arm64.release --quickcheck
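If you also build the debug config, you can poke at d8 directly; for example, the --trace-ignition flag mentioned earlier prints a bytecode execution trace (the /tmp/test.js path here is just a hypothetical script of your own):

# Build the debug config as well, with the same PATH prefix as above
PATH=`pwd`/third_party/llvm-build/Release+Asserts/bin:$PATH ninja -C out.gn/arm64.debug

# Run a trivial script with bytecode tracing enabled
echo "print('hello from arm64 d8')" > /tmp/test.js
out.gn/arm64.debug/d8 --trace-ignition /tmp/test.js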
