I’m often asked if we at the Advanced Solutions Group office can install REDHAWK on a “fill in the blank.” And at this point it should come as no surprise that the tooling we turn to is from the Yocto Project. We’ve used it for the Ettus E310 and loads of Xilinx evaluation boards like the ZCU102. Lately, though, I have been asked about the ZCU111 MPSoC, so for this post I’ll share how to do it using our forks of meta-xilinx and meta-xilinx-tools with Vivado 2018.2 (a process that can also target the ZCU102 via the zcu102-zynqmp MACHINE).
Or you can just download the compressed image, decompress it, and skip ahead to dd’ing the image.
Why Yocto
As a tool, Yocto can be challenging to set up and use, mainly because it offers so much customization. Once configured, however, it is a very powerful way to target several platforms (MACHINE types) without re-running the entire build from scratch, since you are creating your own Linux distribution (and package management) that can cover each of those platforms.
Build Environment Setup
Now, because Yocto can be challenging, we generally like to use a repo manifest and pre-built template files to help bootstrap the environment for ourselves and customers. That route is very convenient but tends to “look like magic,” in that it is easy to take for granted why each piece exists. So for this blog post we’ll lift the veil and do it manually.
NOTE: The use of the meta-xilinx-tools layer implies a few things. First, you have Vivado XXXX.Y installed. Second, you’re using the matching rel-vXXXX.Y branch of meta-xilinx. Third, because we’re using 2018.2 (or .3), the Yocto Rocko (2.4.x) release is required.
Checklist:
- You have Vivado 2018.2 (or .3) installed and licensed to target the ZCU111.
- Your OS meets Yocto’s requirements for Rocko. (This procedure was run on Ubuntu 18.04, which works even though it fails Rocko’s host sanity check.)
- Your computer has at least 16 GB RAM and 50 GB free hard drive space.
Set up your project area:
mkdir -p /some/project
cd /some/project
git clone --recursive -b rocko git://git.yoctoproject.org/poky
git clone -b rocko git://git.openembedded.org/meta-openembedded poky/meta-openembedded
git clone git://github.com/Geontech/meta-redhawk-sdr poky/meta-redhawk-sdr
git clone -b rel-v2018.2 git://github.com/Geontech/meta-xilinx poky/meta-xilinx
git clone -b rel-v2018.2 git://github.com/Geontech/meta-xilinx-tools poky/meta-xilinx-tools
TEMPLATECONF=`pwd`/poky/meta-redhawk-sdr/conf . poky/oe-init-build-env
# you're now in 'build'
NOTE: If you ever need to re-enter the build environment (because you closed your terminal, etc.), navigate to the directory containing build and poky and re-run: . poky/oe-init-build-env
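Using the project path from this post, that looks like:
cd /some/project
. poky/oe-init-build-env
# you're back in 'build'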
Grab your favorite text editor and update your conf/bblayers.conf file to also include Geon’s patched Xilinx layers:
BBLAYERS ?= " \
/some/project/poky/meta \
/some/project/poky/meta-poky \
/some/project/poky/meta-xilinx/meta-xilinx-bsp \
/some/project/poky/meta-xilinx-tools \
/some/project/poky/meta-openembedded/meta-oe \
/some/project/poky/meta-openembedded/meta-networking \
/some/project/poky/meta-openembedded/meta-python \
/some/project/poky/meta-openembedded/meta-filesystems \
/some/project/poky/meta-redhawk-sdr \
"
Also edit conf/local.conf. Around line 38, replace the MACHINE definition with MACHINE = "zcu111-zynqmp". Then identify where your Xilinx installation is located, for example:
XILINX_SDK_TOOLCHAIN ?= "/opt/Xilinx/SDK/2018.2"
NOTE: Our layer branches are rel-v2018.2 since that is what we used at the time of the patches. However, you can safely use Vivado 2018.3 with these patches as well.
Since this build targets an SD card, consider also adding the following lines to conf/local.conf so that WIC generates a disk image (and an easily shareable compressed one):
IMAGE_FSTYPES = "wic wic.xz"
WKS_FILES ?= "sdimage-bootpart.wks"
That’s it. You’re now ready to build.
Building
You’re now ready to start building packages. Over in meta-redhawk-sdr/recipes-core/images you’ll find an image definition, redhawk-test-image.bb, which pulls together a Domain, a GPP, and all of the init.d scripts necessary to boot the target as a stand-alone REDHAWK system, complete with some Components.
bitbake redhawk-test-image
This process will run for quite a while, potentially hours, detailing all of the packages being built according to the dependency graph of the image definition. Towards the end you’ll see a bit of WARNING spam from the sanity checkers about Xilinx’s packaging of the microblaze code necessary to boot the system. These can be ignored.
There are many outputs from this process, including package repositories in build/tmp/deploy/rpm specific to the aarch64 processor, the zcu111-zynqmp board itself, and so on. This is a good place to look if you’re building packages piecemeal to copy to the target for iteratively testing a design, or if you want to expose it as a repository to provide package updates for your target(s) over time.
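As a hypothetical example (the recipe name, version, and target address below are placeholders, and this assumes the rpm tool is present on the target), rebuilding and hand-installing a single package might look like:
# Rebuild just one recipe instead of the whole image
bitbake rh-mycomponent
# Copy the resulting RPM to the running target and install it
scp tmp/deploy/rpm/aarch64/rh-mycomponent-1.0.0-r0.aarch64.rpm root@<target-ip>:/tmp/
ssh root@<target-ip> 'rpm -ivh /tmp/rh-mycomponent-1.0.0-r0.aarch64.rpm'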
Another output you probably want is the disk image, which you can find in build/tmp/deploy/images/zcu111-zynqmp as redhawk-test-image-zcu111-zynqmp.wic. Write that image to your SD card (run from the build directory):
sudo dd if=tmp/deploy/images/zcu111-zynqmp/redhawk-test-image-zcu111-zynqmp.wic of=/dev/sdX && sync
NOTE: Your /dev/sdX will vary based on your system configuration, mmc, etc. Often, running dmesg after inserting the SD card can provide a clue as to what device it is.
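For example (the device names shown are illustrative), you can check the kernel log or the block device list right after inserting the card:
dmesg | tail    # look for the newly detected card, e.g. "[sdb] ..."
lsblk           # or find the block device matching your card's size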
Booting the ZCU111
The ZCU111 needs to be configured to boot from the SD card: set the SW6 switch bank to ON, OFF, OFF, OFF (switches 1, 2, 3, 4, respectively).
Connect your USB serial cable to the device. On my system, the serial console enumerated as /dev/ttyUSB1, to which I attached using: screen /dev/ttyUSB1 115200.
Power on the board and observe your console output. You should see the Das U-Boot loader, then the kernel, and eventually the login prompt. This build used the default configuration, so root does not have a password.
The redhawk-test-image we built provides init.d scripts to start OmniORB, OmniEvents, the REDHAWK_DEV Domain, and a GPP node. It also provides most of the REDHAWK Components, but none of the default waveforms (since most require the data converter, which requires an Intel processor). You can verify things are running from the Python sandbox:
>>> from ossie.utils import redhawk
>>> dom = redhawk.attach()
>>> dom.name
'REDHAWK_DEV'
>>> [d.name for d in dom.devices]
['GPP']
If you’re looking for something else to do, the Components use the standard XML provided by REDHAWK, so the IDs are the same. Therefore you can put together waveforms on your development workstation, copy them to the target’s SDRROOT, and launch them this way too (see the sketch below).
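As a hypothetical sketch (the waveform name and target address are placeholders, and /var/redhawk/sdr is assumed to be the default SDRROOT location), that workflow might look like:
# On the workstation: copy an assembled waveform into the target's SDRROOT
scp -r $SDRROOT/dom/waveforms/MyWaveform root@<target-ip>:/var/redhawk/sdr/dom/waveforms/
Then launch it from the Python sandbox, as before:
>>> from ossie.utils import redhawk
>>> dom = redhawk.attach('REDHAWK_DEV')
>>> app = dom.createApplication('/waveforms/MyWaveform/MyWaveform.sad.xml')
>>> app.start()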
Integration
If you’re more interested in integrating the ZCU111 with the Omni services and the Domain running elsewhere, disable the related /etc/init.d scripts, edit the /etc/omniORB.cfg file accordingly, and reboot. It should join the remote Domain (as long as it’s called REDHAWK_DEV, of course…modify the node XML if you need to change it). Later, when you define your own image definition, you can simply omit those packages and append the node-deployer recipe to build the node with a different Domain name.
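For instance (the address below is a placeholder for wherever your naming and event services actually run), /etc/omniORB.cfg would point at the remote services with something like:
InitRef = NameService=corbaname::192.168.1.50:2809
InitRef = EventService=corbaloc::192.168.1.50:11169/omniEvents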
Now, if you are integrating with a remote Domain, you likely want to install your cross-compiled Components and SoftPkgs into its SDRROOT so that REDHAWK can deploy them to any aarch64 GPP. We’ve made some progress towards making this simpler, but here is the gist:
You can copy the target’s SDRROOT contents, like dom/mgr/rh/ComponentHost, dom/components, etc., to that remote Domain and use meta-redhawk-sdr/scripts/spd_utility to merge them with their counterparts. The script was run during the build to move the binaries and add implementation definitions pointing to cpp-aarch64, so they won’t collide with the defaults if you run the script again (it has a help menu).
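As a rough, hypothetical example (the target address is a placeholder and /var/redhawk/sdr is assumed to be the SDRROOT on both machines), pulling the cross-compiled artifacts over to the Domain host before running spd_utility might look like:
mkdir -p /tmp/zcu111-sdrroot/dom
scp -r root@<target-ip>:/var/redhawk/sdr/dom/components /tmp/zcu111-sdrroot/dom/
scp -r root@<target-ip>:/var/redhawk/sdr/dom/mgr /tmp/zcu111-sdrroot/dom/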
This also means that if you build for a standard Zynq part as well, you’ll end up with cpp-armv7l, for example, and could wind up with a Domain deploying rh.psd to x86_64 as well as aarch64 and armv7l, at which point you could have a Waveform being executed by a heterogeneous set of processors without thinking about it.
And that’s pretty neat.
Conclusion
Hopefully at this point you have a working build environment that can target the ZCU111 (and ZCU102). If you have cpp-implementation Devices, Components, and SoftPkgs, feel free to set up package recipes that inherit from redhawk-device, for example (see meta-redhawk-sdr/classes), to help simplify the recipe down to the basics: 1. SRC_URI to fetch your source code, 2. S to get into the cpp directory, and the other bitbake-required elements like SUMMARY and LICENSE (a minimal sketch follows below). You can then bitbake your-package and RPM-install the resulting package (from build/tmp/deploy/rpm/aarch64) on the target.
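Here is a minimal recipe sketch, assuming a hypothetical component named MyDevice hosted at a placeholder URL; the license, checksum, and revision values are likewise illustrative:
SUMMARY = "My cross-compiled REDHAWK Device"
LICENSE = "LGPLv3"
LIC_FILES_CHKSUM = "file://../COPYING;md5=<md5 of your license file>"

inherit redhawk-device

# Fetch your source code
SRC_URI = "git://github.com/your-org/MyDevice.git;protocol=https"
SRCREV = "${AUTOREV}"

# Build from the cpp implementation directory inside the fetched source
S = "${WORKDIR}/git/cpp"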