Saturday, December 5, 2020

How to create custom docker image for arm64

(1) When starting to use docker for the arm64 architecture, e.g. on an Apple M1, you might notice that custom docker images for arm64 are often missing, so there is a need to build a custom image yourself.

(2) When there is a docker image for amd64, you can pull it and use docker history --no-trunc to view the commands that built it.

(3) You can then create a Dockerfile to build it in your arm64 environment; it is also possible to cross-build it in an amd64 CPU environment.
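Steps (2) and (3) assume you know which platform string to pass to docker buildx for your machine. As a minimal sketch (the architecture names in the case statement are the common values reported by uname -m; adjust for your system), this maps the local CPU to the --platform value:

```shell
#!/bin/sh
# Map the local CPU architecture (as reported by `uname -m`) to the
# Docker platform string used by `docker buildx build --platform ...`.
arch=$(uname -m)
case "$arch" in
  x86_64)        platform="linux/amd64" ;;
  aarch64|arm64) platform="linux/arm64" ;;   # e.g. Apple M1, 64-bit Raspberry Pi
  armv7l)        platform="linux/arm/v7" ;;  # 32-bit Raspberry Pi OS
  *)             platform="unknown" ;;
esac
echo "$platform"
# The value can then be used as, for example:
# docker buildx build --platform "$platform" -t arm64v8/quantlib:1.20 .
```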

(4) For example, a Dockerfile that builds a QuantLib jupyter notebook server image is shown below.
P.S. You need plenty of RAM to build with gcc, preferably 4GB to 8GB.

Shell script
cd $HOME
mkdir -p my-quantlib
cd my-quantlib
# get helloworld.ipynb
wget https://raw.githubusercontent.com/lballabio/dockerfiles/master/quantlib-jupyter/Hello%20world.ipynb
cat >$HOME/my-quantlib/Dockerfile_ql_1.20 <<'HEREEOF'
# Dockerfile_ql_1.20
# docker build -f Dockerfile_ql_1.20 -t arm64v8/quantlib:1.20 .
# docker buildx build --platform linux/arm64 -t arm64v8/quantlib:1.20 .
# Build Quantlib libraries for arm64v8
ARG tag=latest
FROM arm64v8/ubuntu:20.04
MAINTAINER Luigi Ballabio <luigi.ballabio@gmail.com>
LABEL Description="Provide a building environment where the QuantLib Python jupyter-notebook"
RUN apt-get update \
 && DEBIAN_FRONTEND=noninteractive apt-get install -y build-essential wget libbz2-dev vim git
ENV boost_version=1.67.0
ENV boost_dir=boost_1_67_0
# Build boost
RUN echo 'Building boost ...'
#RUN wget https://dl.bintray.com/boostorg/release/${boost_version}/source/${boost_dir}.tar.gz \
RUN wget https://nchc.dl.sourceforge.net/project/boost/boost/${boost_version}/${boost_dir}.tar.gz \
 && tar xfz ${boost_dir}.tar.gz \
 && rm ${boost_dir}.tar.gz \
 && cd ${boost_dir} \
 && ./bootstrap.sh \
 && ./b2 --without-python --prefix=/usr -j 4 link=shared runtime-link=shared install \
 && cd .. && rm -rf ${boost_dir} && ldconfig
# Build Quantlib C++
RUN echo 'Building Quantlib C++ ...'
ENV quantlib_version=1.20
#RUN wget https://dl.bintray.com/quantlib/releases/QuantLib-${quantlib_version}.tar.gz \
RUN wget https://github.com/lballabio/QuantLib/releases/download/QuantLib-v1.20/QuantLib-${quantlib_version}.tar.gz \
 && tar xfz QuantLib-${quantlib_version}.tar.gz \
 && rm QuantLib-${quantlib_version}.tar.gz \
 && cd QuantLib-${quantlib_version} \
 && ./configure --prefix=/usr --disable-static CXXFLAGS=-O3 \
 && make -j 4 && make check && make install \
 && make clean \
 && cd .. && ldconfig
# && cd .. && rm -rf QuantLib-${quantlib_version} && ldconfig
# Build Quantlib-Python
RUN echo 'Build Quantlib-Python ...'
RUN apt-get update \
 && DEBIAN_FRONTEND=noninteractive apt-get install -y swig python3 python3-pip python-dev libgomp1
# Build Quantlib for Python3
RUN echo 'Install Quantlib Python'
ENV quantlib_swig_version=1.20
#RUN wget https://dl.bintray.com/quantlib/releases/QuantLib-SWIG-${quantlib_swig_version}.tar.gz \
RUN wget https://github.com/lballabio/QuantLib-SWIG/releases/download/QuantLib-SWIG-v${quantlib_swig_version}/QuantLib-SWIG-${quantlib_swig_version}.tar.gz \
 && tar xfz QuantLib-SWIG-${quantlib_swig_version}.tar.gz \
 && rm QuantLib-SWIG-${quantlib_swig_version}.tar.gz \
 && cd QuantLib-SWIG-${quantlib_swig_version} \
 && ./configure CXXFLAGS="--param ggc-min-expand=1 --param ggc-min-heapsize=32768" PYTHON=/usr/bin/python3 \
 && make -C Python && make -C Python check && make -C Python install \
 && cd .. && rm -rf QuantLib-SWIG-${quantlib_swig_version} && ldconfig
# Build jupyter-notebook server
RUN python3 -c "print('\033[91m Building jupyter-notebook server ... \033[0m')"
RUN pip3 install --no-cache-dir jupyter jupyterlab matplotlib numpy scipy pandas ipywidgets RISE
RUN jupyter-nbextension install rise --py --sys-prefix
RUN jupyter-nbextension install widgetsnbextension --py --sys-prefix \
 && jupyter-nbextension enable widgetsnbextension --py --sys-prefix
# Build Quantlib for Python2
RUN apt-get update \
 && DEBIAN_FRONTEND=noninteractive apt-get install -y python \
 && apt-get clean
RUN wget https://bootstrap.pypa.io/pip/2.7/get-pip.py \
 && python2 get-pip.py \
 && rm get-pip.py
#RUN wget https://dl.bintray.com/quantlib/releases/QuantLib-SWIG-${quantlib_swig_version}.tar.gz \
RUN wget https://github.com/lballabio/QuantLib-SWIG/releases/download/QuantLib-SWIG-v${quantlib_swig_version}/QuantLib-SWIG-${quantlib_swig_version}.tar.gz \
 && tar xfz QuantLib-SWIG-${quantlib_swig_version}.tar.gz \
 && rm QuantLib-SWIG-${quantlib_swig_version}.tar.gz \
 && cd QuantLib-SWIG-${quantlib_swig_version} \
 && ./configure CXXFLAGS="--param ggc-min-expand=1 --param ggc-min-heapsize=32768" \
 && make -C Python && make -C Python check && make -C Python install \
 && cd .. && rm -rf QuantLib-SWIG-${quantlib_swig_version} && ldconfig
RUN pip2 install --no-cache-dir numpy
EXPOSE 8888
RUN mkdir /notebooks
VOLUME /notebooks
COPY *.ipynb /notebooks/
# Starting jupyter-notebook server
RUN python3 -c "print('\033[92m Starting jupyter-notebook server at port 8888 \033[0m')"
CMD jupyter notebook --no-browser --allow-root --ip=0.0.0.0 --port=8888 --notebook-dir=/notebooks
HEREEOF
# build image
docker build -f Dockerfile_ql_1.20 -t arm64v8/quantlib:1.20 .
# run image
docker run -d -p 8888:8888 --name myquantlibtesting arm64v8/quantlib:1.20
# list the token of the jupyter-notebook server
docker container exec -it myquantlibtesting jupyter notebook list


(5) Testing the QuantLib C++ libraries and QuantLib for Python 2 and Python 3
Shell script
#create and start container for testing
docker run -it --rm --name myquantlib arm64v8/quantlib:1.20 /bin/bash
#Create testql.cpp
cd $HOME
cat > testql.cpp << 'testqlEOF'
#include <ql/quantlib.hpp>
int main() {
    std::cout << "BOOST version is " << BOOST_VERSION << std::endl;
    std::cout << "QL version is " << QL_VERSION << std::endl;
#if __x86_64__ || __WORDSIZE == 64
    std::cout << "This is 64 bits" << std::endl;
#elif __i386__ || __WORDSIZE == 32
    std::cout << "This is 32 bits" << std::endl;
#else
    std::cout << "This is something else" << std::endl;
#endif
    return 0;
}
testqlEOF
g++ testql.cpp -lQuantLib -o testql
./testql
# Test QuantLib C++ Examples
cd $HOME
g++ /QuantLib-*/Examples/Bonds/Bonds.cpp -lQuantLib -o testBonds
./testBonds
cd $HOME
g++ /QuantLib-*/Examples/FRA/FRA.cpp -lQuantLib -o testFRA
./testFRA
# Test python 3 QuantLib
cd $HOME
cat > $HOME/swap.py <<EOF
from __future__ import print_function
import numpy as np
import QuantLib as ql
print("QuantLib version is", ql.__version__)
# Set Evaluation Date
today = ql.Date(31,3,2015)
ql.Settings.instance().setEvaluationDate(today)
# Setup the yield termstructure
rate = ql.SimpleQuote(0.03)
rate_handle = ql.QuoteHandle(rate)
dc = ql.Actual365Fixed()
disc_curve = ql.FlatForward(today, rate_handle, dc)
disc_curve.enableExtrapolation()
hyts = ql.YieldTermStructureHandle(disc_curve)
discount = np.vectorize(hyts.discount)
start = ql.TARGET().advance(today, ql.Period('2D'))
end = ql.TARGET().advance(start, ql.Period('10Y'))
nominal = 1e7
typ = ql.VanillaSwap.Payer
fixRate = 0.03
fixedLegTenor = ql.Period('1y')
fixedLegBDC = ql.ModifiedFollowing
fixedLegDC = ql.Thirty360(ql.Thirty360.BondBasis)
index = ql.Euribor6M(ql.YieldTermStructureHandle(disc_curve))
spread = 0.0
fixedSchedule = ql.Schedule(start, end, fixedLegTenor, index.fixingCalendar(),
                            fixedLegBDC, fixedLegBDC,
                            ql.DateGeneration.Backward, False)
floatSchedule = ql.Schedule(start, end, index.tenor(), index.fixingCalendar(),
                            index.businessDayConvention(),
                            index.businessDayConvention(),
                            ql.DateGeneration.Backward, False)
swap = ql.VanillaSwap(typ, nominal, fixedSchedule, fixRate, fixedLegDC,
                      floatSchedule, index, spread, index.dayCounter())
engine = ql.DiscountingSwapEngine(ql.YieldTermStructureHandle(disc_curve))
swap.setPricingEngine(engine)
print(swap.NPV())
print(swap.fairRate())
EOF
# Test python3
cd $HOME
python3 swap.py
# Test python2
cd $HOME
git clone git://github.com/mmport80/QuantLib-with-Python-Blog-Examples.git
cd QuantLib-with-Python-Blog-Examples/
python2 blog_frn_example.py
cd $HOME
python2 swap.py
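As a rough sanity check on the flat curve used in swap.py above (rate 0.03, Actual/365), a one-year continuously-compounded discount factor should come out close to exp(-0.03). A quick awk one-liner shows the expected ballpark without needing QuantLib (this is only an approximation — the QuantLib curve uses its own compounding convention and exact day counts):

```shell
#!/bin/sh
# Continuously-compounded discount factor exp(-r*t) for a flat 3% curve,
# t = 1 year, as a ballpark check against the curve in swap.py.
r=0.03
t=1
df=$(awk -v r="$r" -v t="$t" 'BEGIN { printf "%.6f", exp(-r*t) }')
echo "$df"
```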


Thursday, May 21, 2020

Personal Installation Guide of Raspberry Pi 4B cluster

Why this project with a Raspberry Pi cluster? Because it is cheap and can be used for many purposes, as below.

Hardware

4 x Raspberry Pi 4B with heat sinks
Raspberry Pi Cluster Case 4 layers with Cooling Fan for each layer
4 x MicroSDHC SanDisk 32G Class 10
One MicroSD Adapter for installation of OS
4 x USB-C power cable
4 x Cat 6 LAN cable
USB power supply with 8 USB ports total max 10A
External USB fans connected to the USB power supply; important to keep the CPUs cool, especially when overclocking
4 x UPS Battery Case 5V max 3.3A (each with 3 x Panasonic 18650BD 3200 mAh batteries)
8 ports Gigabit Ethernet Switch
External RAID-0 array with 2 x 8TB disks (WD Ultrastar HC320, 7200 rpm), USB-C to USB 3.0 interface (the HDs are the most expensive items in this project)
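One number worth checking in the parts list above is the power budget: the official supply rating for a Raspberry Pi 4B is 5V / 3.0A, so four boards can in theory peak above the 10A total of the listed 8-port supply. A quick worst-case sketch (typical draw is far lower, and the UPS battery cases buffer peaks):

```shell
#!/bin/sh
# Worst-case current demand of 4 Pi 4B boards at the official 3.0 A
# per-board rating, versus the 10 A total of the listed 8-port USB supply.
boards=4
per_board_ma=3000     # official Pi 4B PSU rating, in mA
supply_ma=10000       # 8-port hub, 10 A total
need_ma=$((boards * per_board_ma))
echo "peak demand: ${need_ma} mA, supply: ${supply_ma} mA"
if [ "$need_ma" -gt "$supply_ma" ]; then
  echo "over budget at peak: spread boards across ports or add a second supply"
fi
```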
cat /proc/cpuinfo
Reference : https://www.rs-online.com/designspark/raspberry-pi-3-model-b-vs-3-model-b

processor       : 0
BogoMIPS        : 108.00
Features        : fp asimd evtstrm crc32 cpuid
CPU implementer : 0x41
CPU architecture: 8
CPU variant     : 0x0
CPU part        : 0xd08
CPU revision    : 3

processor       : 1
BogoMIPS        : 108.00
Features        : fp asimd evtstrm crc32 cpuid
CPU implementer : 0x41
CPU architecture: 8
CPU variant     : 0x0
CPU part        : 0xd08
CPU revision    : 3

processor       : 2
BogoMIPS        : 108.00
Features        : fp asimd evtstrm crc32 cpuid
CPU implementer : 0x41
CPU architecture: 8
CPU variant     : 0x0
CPU part        : 0xd08
CPU revision    : 3

processor       : 3
BogoMIPS        : 108.00
Features        : fp asimd evtstrm crc32 cpuid
CPU implementer : 0x41
CPU architecture: 8
CPU variant     : 0x0
CPU part        : 0xd08
CPU revision    : 3

Hardware        : BCM2835
Revision        : c03112
Serial          : 100000003bc32951
Model           : Raspberry Pi 4 Model B Rev 1.2


SD Card images for BerryBoot

BerryBoot (very flexible; allows adding custom OS images for multiboot)
https://www.berryterminal.com/doku.php/berryboot

OS : Ubuntu 18.04.3 LTS
Download Ubuntu_Server_arm64_18.04.3.img from
https://sourceforge.net/projects/berryboot/files/os_images/
or a raspberry pi image from
https://wiki.ubuntu.com/ARM/RaspberryPi
or download the latest image for raspi4 and convert it to berryboot format as below
shell script
nohup curl -OL http://cdimage.ubuntu.com/ubuntu/releases/18.04.4/release/ubuntu-18.04.4-preinstalled-server-arm64+raspi4.img.xz &
# or download the 32 bits version
# nohup curl -OL http://cdimage.ubuntu.com/ubuntu/releases/18.04.4/release/ubuntu-18.04.4-preinstalled-server-armhf+raspi4.img.xz &
sudo apt update
sudo apt install kpartx squashfs-tools
unxz ubuntu-18.04.4-preinstalled-server-arm64+raspi4.img.xz
# And follow this guide to create your own berryboot images.
# https://www.berryterminal.com/doku.php/berryboot/adding_custom_distributions
# Convert arm64+raspi4 to berryboot OS image
sudo kpartx -av ubuntu-18.04.4-preinstalled-server-arm64+raspi4.img
#sudo mount /dev/mapper/loop0p2 /mnt
sudo mount /dev/mapper/loop1p2 /mnt
sudo sed -i 's/^\/dev\/mmcblk/#\0/g' /mnt/etc/fstab
sudo sed -i 's/^PARTUUID/#\0/g' /mnt/etc/fstab
sudo rm -f /mnt/etc/console-setup/cached_UTF-8_del.kmap.gz
sudo rm -f /mnt/etc/systemd/system/multi-user.target.wants/apply_noobs_os_config.service
sudo rm -f /mnt/etc/systemd/system/multi-user.target.wants/raspberrypi-net-mods.service
sudo rm -f /mnt/etc/rc3.d/S01resize2fs_once
sudo mksquashfs /mnt Ubuntu_Server_arm64_18.04.4_raspi4.img -comp lzo -e lib/modules
sudo umount /mnt
sudo kpartx -d ubuntu-18.04.4-preinstalled-server-arm64+raspi4.img
# Convert armhf+raspi4 to berryboot OS image
unxz ubuntu-18.04.4-preinstalled-server-armhf+raspi4.img.xz
sudo kpartx -av ubuntu-18.04.4-preinstalled-server-armhf+raspi4.img
sudo mount /dev/mapper/loop1p2 /mnt
sudo sed -i 's/^\/dev\/mmcblk/#\0/g' /mnt/etc/fstab
sudo sed -i 's/^PARTUUID/#\0/g' /mnt/etc/fstab
sudo rm -f /mnt/etc/console-setup/cached_UTF-8_del.kmap.gz
sudo rm -f /mnt/etc/systemd/system/multi-user.target.wants/apply_noobs_os_config.service
sudo rm -f /mnt/etc/systemd/system/multi-user.target.wants/raspberrypi-net-mods.service
sudo rm -f /mnt/etc/rc3.d/S01resize2fs_once
sudo mksquashfs /mnt Ubuntu_Server_armhf_18.04.4_raspi4.img -comp lzo -e lib/modules
sudo umount /mnt
sudo kpartx -d ubuntu-18.04.4-preinstalled-server-armhf+raspi4.img
# BerryBoot way to change default OS images on reboot
# Reference : https://www.raspberrypi.org/forums/viewtopic.php?t=37861
# Reference : https://yoursunny.com/t/2017/berryboot-reboot-into/

Download links for these two converted images and other updated Raspbian images are in the right-hand sidebar of this blog.


Ubuntu Server image setup

shell script
# Plug in the Ethernet cable before boot up
# login: ubuntu  password: ubuntu
# Change the ubuntu password once logged in
# Update Server Security
# Reference : https://www.raspberrypi.org/documentation/configuration/security.md
sudo apt install openssh-server
# check hostname
hostnamectl
# check network interface
ifconfig
# update some packages
sudo apt update
sudo apt-get install dpkg
sudo apt-get install --reinstall python3-minimal python3-lockfile
sudo apt-get install --reinstall python3-twisted
sudo apt-get install --reinstall python3 python3-pip
sudo apt-get install --reinstall python-minimal python-lockfile
sudo apt-get install --reinstall python python-pip
# change hostname, change to pi01, pi02 ...
# Reference : https://linuxize.com/post/how-to-change-hostname-on-ubuntu-18-04/
sudo hostnamectl set-hostname pi01
hostnamectl
# change timezone
sudo dpkg-reconfigure tzdata
# change eth0 to Static IP
# Reference : https://linuxconfig.org/how-to-configure-static-ip-address-on-ubuntu-18-04-bionic-beaver-linux
cat /etc/netplan/01-netcfg.yaml
network:
  version: 2
  renderer: networkd
  ethernets:
    eth0:
      dhcp4: no
      addresses:
        - 10.0.1.XXX/24
      gateway4: 10.0.0.1
      nameservers:
        addresses: [8.8.8.8, 1.1.1.1]
# Once ready apply changes with:
sudo netplan apply
# Mount NTFS external raid disk and Install NFS Server
# Reference : https://www.tecmint.com/install-nfs-server-on-ubuntu/
# Reference : https://vitux.com/install-nfs-server-and-client-on-ubuntu/
# Check UUID or PARTUUID
sudo blkid
# add this in /etc/fstab, for example
PARTUUID="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" /media/RAID0WD ntfs defaults,nls=utf8,dmask=0000,fmask=0022,uid=1000,gid=1000,windows_names 0 0
# reboot to check mounted disk
sudo reboot
# After reboot
df -h
# Install NFS Server (needed before configuring exports)
sudo apt install nfs-kernel-server
# add this in /etc/exports, for example
# /media/RAID0WD 10.0.1.0/24(rw,sync,no_root_squash,no_subtree_check,insecure,anonuid=1000,anongid=1000)
echo "/media/RAID0WD 10.0.1.0/24(rw,sync,no_root_squash,no_subtree_check,insecure,anonuid=1000,anongid=1000)" | sudo tee -a /etc/exports
# Export and restart NFS Server
sudo exportfs -a
sudo systemctl restart nfs-kernel-server
# Allow nfs on firewall
sudo ufw allow from 10.0.1.0/24 to any port nfs
showmount -e pi01
# Install Raspberry pi bin and check cpu temperature
sudo add-apt-repository ppa:ubuntu-raspi2/ppa
sudo apt-get update
sudo apt-get install libraspberrypi-bin
# My CPU temp=38.0'C
vcgencmd measure_temp
# Mount NFS from Mac OS X: in Finder use "Connect to Server" and enter
# nfs://10.0.1.101/media/RAID0WD
# Mount NFS from other Ubuntu nodes
# Reference : https://www.raspberrypi.org/documentation/configuration/nfs.md
# add this in /etc/fstab, for example
# 10.0.1.101:/media/RAID0WD /mnt/RAID0WD nfs auto 0 0
echo "10.0.1.101:/media/RAID0WD /mnt/RAID0WD nfs auto 0 0" | sudo tee -a /etc/fstab
## Install Samba for Ubuntu Server
# Reference : https://linuxize.com/post/how-to-install-and-configure-samba-on-ubuntu-18-04/
# put this in /etc/samba/smb.conf, for example
[raidshare]
path = /media/RAID0WD
browseable = yes
guest ok = no
read only = no
force create mode = 0660
force directory mode = 2770
valid users = ubuntu @ubuntu
## Add Samba password for user ubuntu
sudo smbpasswd -a ubuntu
# Restart Samba Server
sudo systemctl restart smbd
# Allow samba on firewall
sudo ufw allow 'Samba'
# create images folder for berryboot images installation for other nodes
cd $HOME
ln -sf /media/RAID0WD smb_share
cd $HOME/smb_share
mkdir -p images
# Download required berryboot os images
nohup curl -L https://sourceforge.net/projects/berryboot/files/os_images/Ubuntu_Server_arm64_18.04.3.img/download -o Ubuntu_Server_arm64_18.04.3.img &
# Check sha1 signature
openssl sha1 Ubuntu_Server_arm64_18.04.3.img


Raspbian Buster image setup

shell script
# Plug in the Ethernet cable before boot up
# login: pi  password: raspberry
# Change the password once logged in
# Reference : https://www.raspberrypi.org/documentation/configuration/security.md
sudo apt install openssh-server
# check hostname
hostname
# check network interface
ifconfig
# change hostname, change to pi01, pi02 ... etc
sudo raspi-config   # -> Select 2. Network Options -> Select N1 Hostname
# Enable SSH at the command line using raspi-config
sudo raspi-config   # -> Select 5. Interfacing Options -> Select P2 SSH -> Select Yes
# VNC Server at the command line using raspi-config
sudo raspi-config   # -> Select 5. Interfacing Options -> Select P3 VNC -> Select Yes
# change timezone
sudo dpkg-reconfigure tzdata    # -> Select the timezone
# change locales
sudo dpkg-reconfigure locales   # -> Select the locale
# change eth0 to Static IP
# Reference : https://pimylifeup.com/raspberry-pi-static-ip-address/
sudo vi /etc/dhcpcd.conf
# Restart dhcp
sudo service dhcpcd restart
# you will have 2 IP addresses, which is good for mounting nfs at start
hostname -I
# if you have 2 ip addresses and would like to stop the dhcp for eth0, see discussions here:
# https://raspberrypi.stackexchange.com/questions/52010/set-static-ip-and-stop-dhcp-on-jessie-lite
# Mount NTFS external raid disk and Install NFS Server for Raspbian Buster
# Check UUID or PARTUUID
sudo blkid
# add this in /etc/fstab, for example
PARTUUID="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" /media/RAID0WD ntfs defaults,nls=utf8,dmask=0000,fmask=0022,uid=1000,gid=1000,windows_names 0 0
# reboot to check mounted disk
sudo reboot
# After reboot
df -h
# Install NFS Server for Raspbian Buster
sudo apt install nfs-kernel-server
# add this in /etc/exports, for example
# /media/RAID0WD 10.0.1.0/24(rw,sync,no_root_squash,no_subtree_check,insecure,anonuid=1000,anongid=1000)
echo "/media/RAID0WD 10.0.1.0/24(rw,sync,no_root_squash,no_subtree_check,insecure,anonuid=1000,anongid=1000)" | sudo tee -a /etc/exports
# Export and restart NFS Server
sudo exportfs -a
sudo systemctl restart nfs-kernel-server
showmount -e pi01
# Mount NFS from Mac OS X: in Finder use "Connect to Server" and enter
# nfs://10.0.1.101/media/RAID0WD
# Mount NFS from other Raspbian nodes
# Reference : https://www.raspberrypi.org/documentation/configuration/nfs.md
# add this in /etc/fstab, for example
# 10.0.1.101:/media/RAID0WD /mnt/RAID0WD nfs auto 0 0
echo "10.0.1.101:/media/RAID0WD /mnt/RAID0WD nfs auto 0 0" | sudo tee -a /etc/fstab
# Install Samba for Raspbian Buster
sudo apt install samba
# put this in /etc/samba/smb.conf, for example
[raidshare]
path = /media/RAID0WD
browseable = yes
guest ok = no
read only = no
force create mode = 0660
force directory mode = 2770
valid users = pi @pi
# Add Samba password for user pi
sudo smbpasswd -a pi
# Restart Samba Server
sudo systemctl restart smbd
# create images folder for berryboot images installation for other nodes
cd $HOME
ln -sf /media/RAID0WD smb_share
cd $HOME/smb_share
mkdir -p images
# Download required berryboot os images
nohup curl -L https://sourceforge.net/projects/berryboot/files/os_images/Debian_Buster_Raspbian_FULL_2019.10.img/download -o Debian_Buster_Raspbian_FULL_2019.10.img &
nohup curl -L https://sourceforge.net/projects/berryboot/files/os_images/Debian_Buster_Raspbian_2019.10.img/download -o Debian_Buster_Raspbian_2019.10.img &
# Check sha1 signature
openssl sha1 Debian_Buster_Raspbian_FULL_2019.10.img
openssl sha1 Debian_Buster_Raspbian_2019.10.img


Nodes pi01 pi02 pi03 pi04 setup

shell script
# Reference : https://magpi.raspberrypi.org/articles/build-a-raspberry-pi-cluster-computer
# Install OS images to the other nodes, then change the hostname and assign a fixed IP address on each node.
# Mount NFS for other nodes
# Reference : https://www.raspberrypi.org/documentation/configuration/nfs.md
# Install package
sudo apt install nfs-common
# add this in /etc/fstab for the pi02, pi03, pi04 nodes and reboot to take effect
# 10.0.1.101:/media/RAID0WD /mnt/RAID0WD nfs auto 0 0
echo "10.0.1.101:/media/RAID0WD /mnt/RAID0WD nfs auto 0 0" | sudo tee -a /etc/fstab
# Generate an ssh key and copy it to every other node in the cluster
# Reference : http://www.linuxproblem.org/art_9.html
# Login pi01
ssh pi@10.0.1.101
ssh-keygen -t rsa
ssh-copy-id 10.0.1.102
ssh-copy-id 10.0.1.103
ssh-copy-id 10.0.1.104
exit
# Login pi02
ssh pi@10.0.1.102
ssh-keygen -t rsa
ssh-copy-id 10.0.1.101
ssh-copy-id 10.0.1.103
ssh-copy-id 10.0.1.104
exit
# Login pi03
ssh pi@10.0.1.103
ssh-keygen -t rsa
ssh-copy-id 10.0.1.101
ssh-copy-id 10.0.1.102
ssh-copy-id 10.0.1.104
exit
# Login pi04
ssh pi@10.0.1.104
ssh-keygen -t rsa
ssh-copy-id 10.0.1.101
ssh-copy-id 10.0.1.102
ssh-copy-id 10.0.1.103
exit
# Install MPI on each node pi01, pi02, pi03, pi04
sudo apt install mpich python3-mpi4py
sudo apt install python-mpi4py
##Test01
# Running in pi01
mpirun -n 4 -host 10.0.1.101,10.0.1.102,10.0.1.103,10.0.1.104 hostname
##Test02
# Running in pi01
mkdir -p /media/RAID0WD/Projects
ln -sf /media/RAID0WD/Projects $HOME
#Create shell script as temp.sh in $HOME/Projects folder
cat > $HOME/Projects/temp.sh <<EOF
#!/bin/sh
echo "\$(hostname)" "\$(vcgencmd measure_temp)"
EOF
chmod +x $HOME/Projects/temp.sh
# create a common Projects folder in all other nodes
# all nodes pi02 to pi04 create a folder link, assuming NFS mounted from pi01
ln -sf /mnt/RAID0WD/Projects $HOME/
# Running in pi01 to pi04
ssh pi@10.0.1.101 mpirun -n 4 -host 10.0.1.101,10.0.1.102,10.0.1.103,10.0.1.104 $HOME/Projects/temp.sh
ssh pi@10.0.1.102 mpirun -n 4 -host 10.0.1.101,10.0.1.102,10.0.1.103,10.0.1.104 $HOME/Projects/temp.sh
ssh pi@10.0.1.103 mpirun -n 4 -host 10.0.1.101,10.0.1.102,10.0.1.103,10.0.1.104 $HOME/Projects/temp.sh
ssh pi@10.0.1.104 mpirun -n 4 -host 10.0.1.101,10.0.1.102,10.0.1.103,10.0.1.104 $HOME/Projects/temp.sh
##Test03
# Add this in /etc/hosts for all nodes pi01, pi02, pi03, pi04
ssh pi@10.0.1.101
echo -e "10.0.1.101\tpi01\n10.0.1.102\tpi02\n10.0.1.103\tpi03\n10.0.1.104\tpi04" | sudo tee -a /etc/hosts
ssh pi@10.0.1.102
echo -e "10.0.1.101\tpi01\n10.0.1.102\tpi02\n10.0.1.103\tpi03\n10.0.1.104\tpi04" | sudo tee -a /etc/hosts
ssh pi@10.0.1.103
echo -e "10.0.1.101\tpi01\n10.0.1.102\tpi02\n10.0.1.103\tpi03\n10.0.1.104\tpi04" | sudo tee -a /etc/hosts
ssh pi@10.0.1.104
echo -e "10.0.1.101\tpi01\n10.0.1.102\tpi02\n10.0.1.103\tpi03\n10.0.1.104\tpi04" | sudo tee -a /etc/hosts
#Running on any node
cd $HOME/Projects
curl -OL https://raw.githubusercontent.com/mpi4py/mpi4py/master/demo/helloworld.py
mpirun -n 4 -host pi01,pi02,pi03,pi04 python $HOME/Projects/helloworld.py
##Test04
#Running on exactly 2 processes only
cd $HOME/Projects
curl -OL https://raw.githubusercontent.com/mpi4py/mpi4py/master/demo/osu_bw.py
mpirun -n 2 -host pi01,pi02 python $HOME/Projects/osu_bw.py
# create a file $HOME/Projects/4bmachinelist with the list of available pi 4b nodes for mpi
# It is not advised to mix pi 4b and 3b nodes in one mpi run, as it will degrade performance.
mpirun -n 2 -machinefile $HOME/Projects/4bmachinelist python $HOME/Projects/osu_bw.py
#On each node, launch 1 process only
mpirun -npernode 1 -machinefile $HOME/Projects/4bmachinelist $HOME/Projects/temp.sh
# -N is the same as -npernode, -hostfile is the same as -machinefile
mpirun -N 1 -hostfile $HOME/Projects/4bmachinelist $HOME/Projects/temp.sh
# or
mpirun -N 1 -hostfile $HOME/Projects/4bmachinelist bash -c 'echo "$(hostname)" "$(vcgencmd measure_temp)"' | sort
##Test05
# Count how many processes your cluster runs; you should get 4 cores x 4 nodes = 16 processes
cd $HOME/Projects
curl -L https://github.com/Apress/raspberry-pi-supercomputing/archive/master.zip -o supercomputing.zip
unzip supercomputing.zip
mpirun -hostfile 4bmachinelist -N 1 python3 $HOME/Projects/raspberry-pi-supercomputing-master/Codes/code/chapter08/prog01.py
mpirun -hostfile 4bmachinelist -N 4 python3 $HOME/Projects/raspberry-pi-supercomputing-master/Codes/code/chapter08/prog03.py
##Test06
# Calculate primes
# Get the source code from
curl -OL https://people.sc.fsu.edu/~jburkardt/py_src/prime_mpi/prime_mpi.py
# There are 2 errors to fix before using it
# Line 41: there is a missing closing bracket ) at the end
# Line 74: change
#   comm.Reduce ( [ t, MPI.DOUBLE ], [ primes, MPI.INT ], op = MPI.SUM, root = 0 )
# to
#   primes = comm.reduce ( t, op = MPI.SUM, root = 0 )
# Test the time required for different process counts by running the below,
# assuming a maximum of 4 processes per node (the Pi 4 CPU has 4 cores)
cd $HOME/Projects
mpirun -n 4 -hostfile 4bmachinelist python3 prime_mpi.py
mpirun -n 8 -hostfile 4bmachinelist python3 prime_mpi.py
mpirun -n 12 -hostfile 4bmachinelist python3 prime_mpi.py
mpirun -n 16 -hostfile 4bmachinelist python3 prime_mpi.py
#On each node, launch 3 and 4 processes, and time the difference
cd $HOME/Projects
time mpirun -N 3 -hostfile 4bmachinelist python3 prime_mpi.py
time mpirun -N 4 -hostfile 4bmachinelist python3 prime_mpi.py
# Please post your results of Test06 here in the comment
##Test07
# Collective communication using the scatter function example
cd ~/Projects
curl -L https://pythonprogramming.net/scatter-gather-mpi-mpi4py-tutorial/ | grep -A13 -B1 "from mpi4py" | sed '1d' > sct9.py
time mpirun -N 4 -hostfile 4bmachinelist python sct9.py
##Test08
# Collective communication using the gather function example
cd ~/Projects
curl -L https://pythonprogramming.net/mpi-gather-command-mpi4py-python/ | grep -A19 -B1 "from mpi4py" | sed '1d' > sct10.py
time mpirun -N 4 -hostfile 4bmachinelist python sct10.py
##Test09
# mpicc examples
cd $HOME/Projects
curl -L https://github.com/wesleykendall/mpitutorial/archive/gh-pages.zip -o mpitutorial.zip
unzip mpitutorial.zip
cd mpitutorial-gh-pages/tutorials/mpi-reduce-and-allreduce/code
make
time mpirun -N 4 -hostfile $HOME/Projects/4bmachinelist reduce_avg 100000000
time mpirun -N 4 -hostfile $HOME/Projects/4bmachinelist reduce_stddev 100000000
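Test05 above expects 4 cores x 4 nodes = 16 processes. The bookkeeping mpirun does with -N (ranks per node) and a machinefile can be sketched in plain shell (the machinefile here is a throwaway temporary file, not the real $HOME/Projects/4bmachinelist):

```shell
#!/bin/sh
# Reproduce the rank count mpirun derives from `-N <npernode>` plus a
# machinefile: total ranks = nodes * npernode. A Pi 4B has 4 cores, so
# an npernode above 4 oversubscribes the CPU.
machinefile=$(mktemp)
printf 'pi01\npi02\npi03\npi04\n' > "$machinefile"
nodes=$(wc -l < "$machinefile")
npernode=4
total=$((nodes * npernode))
echo "$total"   # 4 nodes x 4 ranks per node = 16
rm -f "$machinefile"
```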


Optional Server or services setup

shell script    Select all
# Nginx (a lightweght webserver) and fast php plugin sudo apt-get install nginx cd; ln -s /usr/share/nginx/html . # check ip address and use browser connect to test web server hostname -I ifconfig eth0 ifconfig wlan0 # for wireless lan ip addr | grep -Po '(?!(inet 127.\d.\d.1))(inet \K(\d{1,3}\.){3}\d{1,3})' # install php in nginx sudo apt install php-fpm php-curl php-gd php-cli php7.3-opcache php-mbstring php-xml php-zip # link the html folder to home cd $HOME sudo chown pi:pi /var/www/html ln -sf /var/www/html . # create testing php page cat > ~/html/info.php <<EOF <?php phpinfo(); ?> EOF # add in /etc/php/7.3/fpm/pool.d/www.conf user = pi group = pi # enable php in nginx and edit this file sudo vi /etc/nginx/sites-enabled/default # and change or add the followings in server section: server { ... # Add index.php to the list if you are using PHP index index.html index.htm index.php index.nginx-debian.html; ... ## Begin - PHP location ~ \.php$ { # Choose either a socket or TCP/IP address fastcgi_pass unix:/var/run/php/php7.3-fpm.sock; # fastcgi_pass 127.0.0.1:9000; fastcgi_split_path_info ^(.+\.php)(/.+)$; fastcgi_index index.php; include fastcgi_params; fastcgi_param SCRIPT_FILENAME $document_root/$fastcgi_script_name; } ## End - PHP ## Begin - Security # deny all direct access for these folders location ~* /(.git|cache|bin|logs|backups|tests)/.*$ { return 403; } # deny running scripts inside core system folders location ~* /(system|vendor)/.*\.(txt|xml|md|html|yaml|php|pl|py|cgi|twig|sh|bat)$ { return 403; } # deny running scripts inside user folder location ~* /user/.*\.(txt|md|yaml|php|pl|py|cgi|twig|sh|bat)$ { return 403; } # deny access to specific files in the root folder location ~ /(LICENSE.txt|composer.lock|composer.json|nginx.conf|web.config|htaccess.txt|\.htaccess) { return 403; } ## End - Security ... 
} # # see instructions for php7 here -> https://getgrav.org/blog/raspberrypi-nginx-php7-dev # reload web server and test # check to ensure the /var/run/php/php7.3-fpm.sock file exists sudo service nginx restart sudo service php7.3-fpm restart ls -l /var/run/php/php7.3-fpm.sock # Use this command to check whether the web server is working or not curl -L http://127.0.0.1/ curl -L http://127.0.0.1/info.php # python cgi plugin for nginx see here # install minidlna as a media server sudo apt install minidlna # edit /etc/minidlna.conf and add the followings media_dir=V,/media/RAID0WD/MyMovie friendly_name=MyMovie # edit /etc/default/minidlna and add the followings USER="root" GROUP="root" # reload minidlna sudo service minidlna restart sudo service minidlna force-reload # see what network services are working on raspberry pi sudo netstat -ntlp # free ddns no-ip.com Free sign-up and have 3 Hostnames but need to Confirm Every 30 Days # After sign-up, manual update of IP address # Use this command to obtain your public IP address and update on their website host myip.opendns.com resolver1.opendns.com | grep myip # and then download and Install the dynamic update client for Linux https://www.noip.com/support/knowledgebase/installing-the-linux-dynamic-update-client/ You can have 3 host names and have to install virtual host in nginx web server. In server settings of /etc/nginx/sites-available/default, use this settings to map to different html subfolders for different hosts server { ... server_name ~^(.*)\.(.*)\.(.*)$; set $host_name $1; set $subdomain_name $2; set $domain_name $3; root /var/www/html/$host_name.$subdomain_name.$domain_name; ... 
} # Suppose myhostname1, myhostname2 and myhostname3 are the hostnames obtained from no-ip.com # Setup html root for mutli-hosts as above settings in nginx sudo chown -R pi:pi /var/www/html mkdir -p /var/www/html/10.0.1.101 mkdir -p /var/www/html/127.0.0.1 mkdir -p /var/www/html/myhostname1.ddns.net mkdir -p /var/www/html/myhostname2.ddns.net mkdir -p /var/www/html/myhoatname3.ddns.net # Restart web server to be effective sudo service nginx restart sudo service php7.3-fpm restart # Test web server after restart services cd /var/www/html/ cp index.nginx-debian.html 10.0.1.101 curl -L http://10.0.1.101/ cd /var/www/html/ cp info.php 127.0.0.1/ curl -L http://127.0.0.1/info.php # Test webserver from ddns cd /var/www/html/myhostname1.ddns.net curl -L https://getgrav.org/blog/raspberrypi-nginx-php7-dev -o index.html # make sure your hone router tcp port 80 has been forwarded to your internal Pi host and test with curl -L http://myhostname1.ddns.net/ # And also test the webserver from the browser on Phone # How to access the server and nodes from Android Phone # Recommend Termux from Google Play Store # It use the Volume Up key + keyboard to enter special control characters and can install packages # Reference : https://wiki.termux.com/wiki/Touch_Keyboard apt update apt upgrade # Install ssh and login server apt install openssh ssh-copy-id pi@10.0.1.101 ssh pi@10.0.1.101 # Assume ddns is setup ssh-copy-id pi@myhostname.ddns.net ssh pi@myhostname.ddns.net # Install python 3 apt search python apt install python # To improve command line history productivity, please refer Terminal history usage tips : https://www.howtogeek.com/howto/44997/how-to-use-bash-history-to-improve-your-command-line-productivity/amp/ # ssh forwarding from Android Device (better to have Android tablet with keyboard and mouse) # Reference: https://wiki.termux.com/wiki/Main_Page # Need to install VNC Viewer and Termux from Google Play Store # Install and start vncserver in Termux vncserver -localhost 
export DISPLAY=":1" # ssh forwarding and login pi ssh -Y pi@10.0.1.104 # install and run jupyter-notebook from pi (after installation of tensorflow) pip3 install jupyter jupyter-notebook & # run the juypter notebook example from https://colab.research.google.com/github/lmoroney/io19/blob/master/Zero%20to%20Hero/Rock-Paper-Scissors.ipynb # After session ended and kill vncserver in Termux vncserver -kill :1 # free ssl certificate for web server https://letsencrypt.org/getting-started/ # After you have your ddns host and nginx running on your Pi, follow the certbot instructions here # Reference : https://certbot.eff.org/lets-encrypt/debianbuster-nginx # set up certbot and obtain ssl certifcate for nginx sudo apt-get install certbot python-certbot-nginx sudo certbot --nginx crontab -e # add this entry to automate letencrypt certificate renewal 43 6 * * * certbot renew --renew-hook "systemctl reload nginx" Certificate and chain have been saved at: /etc/letsencrypt/live/myhostname1.ddns.net/fullchain.pem Your key file has been saved at: /etc/letsencrypt/live/myhostname1.ddns.net/privkey.pem Your cert will expire on (90 days after). To obtain a new or tweaked version of this certificate in the future, simply run certbot again with the "certonly" option. 
# To non-interactively renew *all* of your certificates, run "certbot renew"
sudo certbot certonly
sudo certbot certificates
sudo systemctl reload nginx
# make sure your home router tcp port 443 has been forwarded to your internal Pi host and test with
curl -L https://myhostname1.ddns.net/

# Install OpenVPN Server
# After you have the ddns hostname or a fixed IP address, set up OpenVPN as per the instructions here
# Reference : https://www.pcmag.com/how-to/how-to-create-a-vpn-server-with-raspberry-pi
curl -L https://install.pivpn.io | bash
# Choose OpenVPN and use udp port 1194
# Also need to assign a fixed IP to your Pi
# After installation, sudo reboot to take effect
# forward udp port 1194 to the internal IP address of your Pi

# Client access
# Create a configuration file, e.g. for iphone, macbook etc.
pivpn add
# Use the "OpenVPN Connect App" for phone and notebook;
# send the profile to the client by email and import it, e.g. for iPhone

# Make use of X11 forwarding to run graphical applications, see https://kb.iu.edu/d/bdnt
# For macOS High Sierra or above, see this guide to install XQuartz:
# https://www.unixtutorial.org/get-x11-forwarding-in-macos-high-sierra/
# You can install it from https://www.xquartz.org or via "sudo port -v install xorg" in the terminal.
# For Windows 10 WSL, install an X server in Windows 10 such as Xming -> https://sourceforge.net/projects/xming/
# and set DISPLAY for WSL x11-apps
export DISPLAY=localhost:0.0
export DISPLAY=:0
# For Linux, there is built-in support. In Terminal, type
ssh -Y pi@10.0.1.101
# After logging in to the Raspberry Pi
sudo apt-get install idle3
idle &
# run scratch
sudo apt-get install scratch
scratch &
# run codeblocks
sudo apt install codeblocks
codeblocks &
# If you get a "cannot open display" error, see the discussion here:
# https://superuser.com/questions/310197/how-do-i-fix-a-cannot-open-display-error-when-opening-an-x-program-after-sshi
# run browser
chromium-browser &

# Access the Raspberry Pi Desktop Remotely
# Use the Windows Remote Desktop Client to connect to the Raspberry Pi.
# Install xrdp and reboot the pi after installation
sudo apt install xrdp
sudo systemctl restart xrdp
# For macOS, there is "Microsoft Remote Desktop Connection Client for Mac" in the Mac App Store.

# setup cron job to backup project data
# Reference : https://www.raspberrypi.org/documentation/linux/usage/cron.md
# Create a shell script e.g.
cat > /home/pi/backup.sh <<EOF
#!/bin/sh
cd /media/RAID0WD; tar --exclude='./Projects/Downloads' -zcf Projects-backup/"Projects$(date '+%Y%m%d').tar.gz" Projects
EOF
chmod a+x /home/pi/backup.sh
mkdir -p /media/RAID0WD/Projects-backup
# add this entry in the crontab
0 0 * * * /home/pi/backup.sh
# List crontab
crontab -l

# Overclock
# Reference : https://www.seeedstudio.com/blog/2020/02/12/how-to-safely-overclock-your-raspberry-pi-4-to-2-147ghz/

# Install docker
# Not for berryboot images
curl -sSL https://get.docker.com | sh
sudo usermod -aG docker $USER
newgrp docker
sudo apt install libffi-dev libssl-dev python3 python3-pip
sudo apt-get remove python-configparser
sudo pip3 -v install docker-compose

# test docker hello-world
docker run hello-world

cd $HOME
mkdir my-wordpress
cd my-wordpress
cat > docker-compose.yaml <<EOF
version: '3.2'

services:
  db:
    image: hypriot/rpi-mysql
    volumes:
      - "./.data/db:/var/lib/mysql"
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: wordpress
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: wordpress

  wordpress:
    depends_on:
      - db
    image: wordpress:latest
    links:
      - db
    ports:
      - "8000:80"
    restart: always
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_PASSWORD: wordpress
EOF
docker-compose up -d
# then try browser http://localhost:8000/
# stop the docker-compose example
docker-compose down

# test nodejs
cd $HOME
git clone https://github.com/hypriot/rpi-node-haproxy-example
cd rpi-node-haproxy-example
docker-compose up
curl http://localhost:80
curl http://localhost:70
docker-compose stop

# test nodejs + mongodb
cd $HOME
git clone https://github.com/hagaik/easy-node-authentication.git
cd easy-node-authentication
cat > config/database.js <<EOF
// config/database.js
module.exports = {
    'url' : 'mongodb://mongo:27017'
    // looks like mongodb://<user>:<pass>@mongo.onmodulus.net:27017/Mikha4ot
};
EOF
cat > Dockerfile <<EOF
FROM hypriot/rpi-node
# Create app directory
WORKDIR /usr/src/app
# Install app dependencies
COPY package*.json ./
RUN npm install
# Copy app source code
COPY . .
# Expose port and start application
EXPOSE 8080
CMD [ "npm", "start" ]
EOF
cat > docker-compose.yaml <<EOF
version: "3.2"

services:
  db:
    image: dhermanns/rpi-mongo
    volumes:
      - "data-volume:/data/db"
    restart: always
    ports:
      - "27017:27017"
    expose:
      - "27017"
  webm:
    build: .
    depends_on:
      - db
    ports:
      - "8080:8080"
    expose:
      - "8080"

volumes:
  data-volume:
EOF
# build and start as daemon
docker-compose up --build -d
# then try browser http://localhost:8080/
# stop and down
docker-compose down -v

# move docker data-root to nfs mounted drive e.g. /mnt/docker-data
sudo service docker stop
cat << EOF | sudo tee -a /etc/docker/daemon.json
{
  "storage-driver": "overlay",
  "data-root": "/mnt/docker-data"
}
EOF
sudo rsync -aP /var/lib/docker/ /mnt/docker-data
sudo mv /var/lib/docker /var/lib/docker.old
sudo service docker start
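Note that `tee -a` appends to /etc/docker/daemon.json, so an existing file would end up with two concatenated JSON objects and docker would refuse to start. A small Python sketch to validate the config before restarting docker (the config text mirrors the heredoc above):

```python
import json

# The daemon.json content written by the heredoc above; if tee -a appended it
# to a non-empty file, json.loads on the real file would raise a ValueError.
daemon_cfg = """
{
  "storage-driver": "overlay",
  "data-root": "/mnt/docker-data"
}
"""

cfg = json.loads(daemon_cfg)  # raises ValueError on corrupted/duplicated JSON
print(cfg["data-root"])       # → /mnt/docker-data
```

In practice, run the same check against /etc/docker/daemon.json itself before `sudo service docker start`.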


Install tensorflow and horovod for Pi 4 cluster

shell script    Select all
# Install all these packages on every node in the Pi 4 cluster in order to run horovod with tensorflow
# Install tensorflow 2.1.0
# Reference https://qengineering.eu/install-tensorflow-2.1.0-on-raspberry-pi-4.html
# or https://qengineering.eu/install-tensorflow-2.2.0-on-raspberry-pi-4.html for tensorflow 2.2.0
cd ~/Projects
sudo apt-get install gfortran
sudo apt-get install libhdf5-dev libc-ares-dev libeigen3-dev
sudo apt-get install libatlas-base-dev libopenblas-dev libblas-dev
sudo apt-get install liblapack-dev cython
sudo pip3 install pybind11
sudo apt-get install python3-h5py

# upgrade pip3 and check pip3 version
curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
python3 get-pip.py --force-reinstall
# pip 20.1.1 from /home/pi/.local/lib/python3.7/site-packages/pip (python 3.7)
python3 --version
pip3 --version

# download the wheel
wget https://github.com/Qengineering/Tensorflow-Raspberry-Pi/raw/master/tensorflow-2.1.0-cp37-cp37m-linux_armv7l.whl
# install TensorFlow 2.1.0
python3 -m pip install --user tensorflow-2.1.0-cp37-cp37m-linux_armv7l.whl

# test run examples as in how-to-install-tensorflow-with-gpu
python3 -m pip install --user matplotlib
python3 -m pip install --user pandas
python3 -m pip install --user keras scikit-learn
cd ~/Projects
curl -L https://tinyurl.com/tensorflowwin | grep -A7 tftest.py | sed '1,2d' > tftest.py
python3 tftest.py
curl -L https://tinyurl.com/tensorflowwin | grep -A129 irislearn.py | sed '1,8d' > irislearn.py
curl -L https://tinyurl.com/tensorflowwin | grep -A150 iris.data.nbsp | sed '1d' > iris.data
python3 irislearn.py
curl -L https://tinyurl.com/tensorflowwin | grep -A37 keraslearn.py | sed '1,3d' > keraslearn.py
curl -L https://tinyurl.com/tensorflowwin | grep -A768 pima-indians-diabetes.data.nbsp | sed '1d' > pima-indians-diabetes.data
python3 keraslearn.py

# install horovod
# https://github.com/horovod/horovod#install
python3 -m pip install 'cffi>=1.4.0' cloudpickle
python3 -m pip install horovod
# Reboot to make it effective
sudo reboot

# First test run the simple hellohorovod.py
cd ~/Projects
cat > hellohorovod.py <<EOF
from mpi4py import MPI
import horovod.tensorflow as hvd

# Split COMM_WORLD into subcommunicators
subcomm = MPI.COMM_WORLD.Split(color=MPI.COMM_WORLD.rank % 2,
                               key=MPI.COMM_WORLD.rank)

# Initialize Horovod
hvd.init(comm=subcomm)

print('COMM_WORLD rank: %d, Name: %s, Horovod rank: %d' % (MPI.COMM_WORLD.rank, MPI.Get_processor_name(), hvd.rank()))
EOF
# run it with
cd ~/Projects
horovodrun -np 4 -H localhost:1,pi02:1,pi03:1,pi04:1 python3 hellohorovod.py

# Then try running the tensorflow v2 example as below
# please be warned: watch out for the temperature of the Pis.
# https://github.com/horovod/horovod/blob/master/examples/tensorflow2_keras_mnist.py
# For non-keras, the sample is https://github.com/horovod/horovod/blob/master/examples/tensorflow2_mnist.py
cd ~/Projects
wget https://raw.githubusercontent.com/horovod/horovod/master/examples/tensorflow2_keras_mnist.py
time horovodrun -np 4 -H localhost:1,pi02:1,pi03:1,pi04:1 python3 tensorflow2_keras_mnist.py
time horovodrun -np 8 -H localhost:2,pi02:2,pi03:2,pi04:2 python3 tensorflow2_keras_mnist.py
time horovodrun -np 12 -H localhost:3,pi02:3,pi03:3,pi04:3 python3 tensorflow2_keras_mnist.py
# 1 slot per node has the fastest performance for the Pi cluster
# With the release of the new 8GB Pi4B model, it is possible to increase the number of slots for these 8GB machines.
# append own public key to each node's authorized list
ssh pi@10.0.1.101
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
exit
ssh pi@10.0.1.102
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
exit
ssh pi@10.0.1.103
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
exit
ssh pi@10.0.1.104
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
exit

# create ~/Projects/myhostfile
cat > ~/Projects/myhostfile <<EOF
pi01 slots=1
pi02 slots=1
pi03 slots=1
pi04 slots=1
EOF

# running in background with nohup
cd ~/Projects
nohup horovodrun -np 4 -hostfile myhostfile python3 tensorflow2_keras_mnist.py &
cat ~/Projects/nohup.out
exit

# create a folder in a non-nfs ext4 partition if running on nodes other than pi01
mkdir -p ~/horovod
ssh pi01 'mkdir -p ~/horovod'
ssh pi02 'mkdir -p ~/horovod'
ssh pi03 'mkdir -p ~/horovod'
ssh pi04 'mkdir -p ~/horovod'
# When running on pi02, pi03 and pi04, it cannot start in the nfs shared folder if it is not an ext4 partition.
# e.g. when starting on pi03
ssh pi03
cd ~/horovod
horovodrun -np 4 -hostfile ~/Projects/myhostfile python3 ~/Projects/tensorflow2_keras_mnist.py

# run it with horovod, time it and redirect the output to keras_mnist.np4.out
cd ~/horovod
nohup bash -c 'time horovodrun -np 4 -hostfile ~/Projects/myhostfile python3 ~/Projects/tensorflow2_keras_mnist.py' &> keras_mnist.np4.out &

# The times for different numbers of nodes in this test
nohup bash -c 'time horovodrun -np 4 -H pi01:1,pi02:1,pi03:1,pi04:1 --output-filename logs python3 ~/Projects/tensorflow2_keras_mnist.py' &> nohup.out.np4_1111 &
nohup bash -c 'time horovodrun -np 3 -H pi01:1,pi02:1,pi03:1 --output-filename logs python3 ~/Projects/tensorflow2_keras_mnist.py' &> nohup.out.np3_111 &
nohup bash -c 'time horovodrun -np 2 -H pi01:1,pi02:1 --output-filename logs python3 ~/Projects/tensorflow2_keras_mnist.py' &> nohup.out.np2_11 &
nohup bash -c 'time horovodrun -np 1 -H pi01:1 --output-filename logs python3 ~/Projects/tensorflow2_keras_mnist.py' &> nohup.out.np1_1 &

# nohup.out.np4_1111:real  not yet done (maybe 50m)
# nohup.out.np3_111:real   88m13.692s
# nohup.out.np2_11:real    129m11.322s
# nohup.out.np1_1:real     207m28.119s
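From the measured wall-clock times above, the parallel speedup and efficiency can be computed directly (the np4 run is only estimated in the post, so this sketch uses the three measured runs):

```python
# Measured "real" times from the horovodrun tests above, in seconds
times = {
    1: 207 * 60 + 28.119,  # np1: 207m28.119s
    2: 129 * 60 + 11.322,  # np2: 129m11.322s
    3: 88 * 60 + 13.692,   # np3: 88m13.692s
}

base = times[1]
for n in sorted(times):
    speedup = base / times[n]        # how much faster than one node
    efficiency = speedup / n         # fraction of ideal linear scaling
    print(f"np{n}: speedup {speedup:.2f}x, efficiency {efficiency:.0%}")
```

The sub-linear efficiency is expected on a Pi cluster, where the 1Gbps Ethernet allreduce traffic competes with data loading.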


Some shortcuts

shell script    Select all
# Offending ECDSA key
# Offending ECDSA key in $HOME/.ssh/known_hosts:5
# For Mac sed
sed -i '' '5d' $HOME/.ssh/known_hosts
# For GNU sed
sed -i '5d' $HOME/.ssh/known_hosts

# print lines 5 to 6 of a text file
sed -n '5,6p' $HOME/.ssh/known_hosts
# print line 1 and lines 5 to 6 of a text file
sed -n -e '1p' -e '5,6p' $HOME/.ssh/known_hosts

# add all the hosts to the ~/.ssh/known_hosts file using ssh-keyscan
# first login to pi01 10.0.1.101
ssh pi@10.0.1.101
ssh-keyscan -t rsa,dsa pi02,pi01,pi04 > ~/.ssh/known_hosts
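The same line deletion can be scripted portably, avoiding the BSD vs GNU `sed -i` difference shown above. A minimal Python sketch, assuming the offending line number is already known from the ssh error message:

```python
def drop_line(text: str, lineno: int) -> str:
    """Remove the 1-indexed line `lineno`, mirroring `sed -i '5d' known_hosts`."""
    lines = text.splitlines(keepends=True)
    del lines[lineno - 1]
    return "".join(lines)

# Hypothetical known_hosts content for illustration
sample = "host1 ssh-rsa AAA\nhost2 ssh-rsa BBB\nhost3 ssh-rsa CCC\n"
print(drop_line(sample, 2))  # host2's entry is gone
```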






Saturday, May 16, 2020

Running python cgi scripts on the Raspberry Pi nginx

Basically, the setup of the python cgi plugin is described at https://www.takaitra.com/running-python-cgi-scripts-on-the-raspberry-pi/
and the enhanced functions are in the uwsgi-cgi documentation here: https://uwsgi-docs.readthedocs.io/en/latest/CGI.html
Except for the following:

# Build and install the uwsgi with the cgi plugin
wget https://projects.unbit.it/downloads/uwsgi-latest.tar.gz
tar zxvf uwsgi-latest.tar.gz 
cd uwsgi-2.0.18
# compile as cgi plugin
make PROFILE=cgi
sudo cp uwsgi /usr/local/bin/



# Create the file /etc/uwsgi.ini
[uwsgi]
plugins = cgi
# change to unix sock
socket = /tmp/uwsgi.sock
#socket = 127.0.0.1:9000
module = pyindex
cgi = /var/www/html/cgi-bin
#cgi = /usr/share/nginx/www
cgi-allowed-ext = .py
cgi-helper = .py=python
logger=file:/tmp/uwsgi-error.log
uid = www-data
gid = www-data



# Add a location to the /etc/nginx/sites-available/default
location ~ \.py$ {
  # uwsgi_pass 127.0.0.1:9000;
  # change to unix sock
  uwsgi_pass unix:/tmp/uwsgi.sock;
  include uwsgi_params;
  uwsgi_modifier1 9;
}


Test this python script, which shows the temperature of the Raspberry Pi in an html web page

/var/www/html/cgi-bin/temp.py    Select all
#!/usr/bin/env python
import os

# Return CPU temperature as a character string
def getCPUtemperature():
    res = os.popen('vcgencmd measure_temp').readline()
    return(res.replace("temp=", "").replace("'C\n", ""))

# We have to print a valid HTTP header first so the browser will know how to decode the data
print "Content-type: text/html\n\n"
temp1 = getCPUtemperature()
print temp1
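The script above targets the Pi's default python 2 (matching the `cgi-helper = .py=python` mapping in uwsgi.ini), but the parsing itself can be unit-tested off the Pi by feeding in a sample string. A Python 3 sketch of the same parsing:

```python
def parse_temp(raw: str) -> float:
    """Extract the numeric value from vcgencmd output like "temp=47.2'C"."""
    return float(raw.replace("temp=", "").replace("'C", "").strip())

print(parse_temp("temp=47.2'C\n"))  # → 47.2
```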


/var/www/html/temp.html    Select all
<html>
<head>
<title>Pi Temp</title>
<script src="http://code.jquery.com/jquery-1.10.1.min.js"></script>
</head>
<body>
<h1>Temp from Pi</h1>
<script>
$(document).ready(function () {
    var interval = 500; // number of milliseconds between each call
    var refresh = function() {
        $.ajax({
            url: "temp.py",
            cache: false,
            success: function(html) {
                $('#pi-temp-here').html(html);
                setTimeout(function() {
                    refresh();
                }, interval);
            }
        });
    };
    refresh();
});
</script>
<div id="pi-temp-here"></div>
</body>
</html>




Shell script    Select all
# Append the video group to the www-data user
sudo usermod -aG video www-data
# reboot the Raspberry Pi and test the python cgi script
sudo reboot
# then browse to http://127.0.0.1/temp.html




To install FastCGI for php in nginx, please follow this guide -> https://getgrav.org/blog/raspberrypi-nginx-php7-dev

If you use buster, it will install the latest php 7.3, so change every 7.2 in this guide to 7.3; the installation of packages will be
sudo apt-get update
sudo apt-get install php php-curl php-gd php-fpm php-cli php-opcache php-mbstring php-xml php-zip


#add in /etc/php/7.3/fpm/pool.d/www.conf
user = pi
group = pi


#reload web server and test
#check to ensure the /var/run/php/php7.3-fpm.sock file exists
sudo service nginx restart
sudo service php7.3-fpm restart




Friday, May 8, 2020

How to build tensorflow in Arch Linux

1) The instructions to build tensorflow are here: https://www.tensorflow.org/install/source#ubuntu

2) It is important to use the same GCC version and Bazel version for the compilation work as listed in the guide. E.g. to build tensorflow 2.0, use the correct version in order to avoid failure.
tensorflow-2.0.0 · Python 2.7, 3.3-3.7 · GCC 7.3.1 · Bazel 0.26.1
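Before starting a multi-hour bazel build, it is worth asserting that the toolchain matches; a small sketch of such a check (the required versions are taken from the compatibility line above, the helper names are my own):

```python
def version_tuple(v: str):
    """Turn a dotted version string like "0.26.1" into a comparable tuple."""
    return tuple(int(x) for x in v.split("."))

# Required versions for tensorflow-2.0.0, from the compatibility table
REQUIRED = {"bazel": "0.26.1", "gcc": "7.3.1"}

def check(tool: str, installed: str) -> bool:
    """True if the installed version matches the required one exactly."""
    return version_tuple(installed) == version_tuple(REQUIRED[tool])

print(check("bazel", "0.26.1"))  # True
print(check("gcc", "9.2.0"))     # False: a newer GCC can break the r2.0 build
```

Feed it the output of `gcc --version` and `bazel version` before kicking off the build.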

3) As the compilation is very cpu intensive and may take several hours, it is also important to use a faster machine with more cores for this job, and to avoid using a docker image.

4) The steps are as below; first set up Arch Linux and the wifi connection as in my previous post.
history.txt    Select all
Assume running as root
Connect to Wifi and check status
001 netctl list
002 netctl start wlan0-MyNetwork
003 ifconfig wlan0
Change to bash shell
005 chsh -s /bin/bash
006 exec bash
Download GCC 7.3.1 and libs from
https://archive.org/download/archlinux_pkg_gcc7-libs
https://archive.org/download/archlinux_pkg_gcc7/
Install GCC 7.3.1 and other dependencies
010 pacman -Sy
011 pacman -U --noconfirm https://archive.org/download/archlinux_pkg_gcc7/gcc7-7.3.1%2B20180406-2-x86_64.pkg.tar.xz https://archive.org/download/archlinux_pkg_gcc7-libs/gcc7-libs-7.3.1%2B20180406-2-x86_64.pkg.tar.xz
012 ln -sf /usr/bin/gcc-7 /usr/local/bin/gcc
013 ln -sf /usr/bin/g++-7 /usr/local/bin/g++
014 ln -sf /usr/bin/cc-7 /usr/local/bin/cc
015 ln -sf /usr/bin/cpp-7 /usr/local/bin/cpp
016 ln -sf /usr/lib/gcc/x86_64-pc-linux-gnu/7.3.1/cc1 /usr/local/lib/.
017 ln -sf /usr/lib/gcc/x86_64-pc-linux-gnu/7.3.1/cc1plus /usr/local/lib/.
018 hash -r
019 gcc --version
Download tensorflow code and checkout r2.0
020 cd ~
021 pacman -S --noconfirm git patch
022 git clone https://github.com/tensorflow/tensorflow.git
023 cd tensorflow/
024 git checkout r2.0
Check the Bazel version requirement from the tensorflow code
025 grep TF_M[AI][XN] configure.py
Download Bazel Installer from https://github.com/bazelbuild/bazel/releases?after=0.27.1
026 cd ~
027 curl -OL https://github.com/bazelbuild/bazel/releases/download/0.26.1/bazel-0.26.1-installer-linux-x86_64.sh
Install Bazel 0.26.1
028 pacman -S --noconfirm unzip which curl
029 /bin/bash bazel-0.26.1-installer-linux-x86_64.sh
030 source /usr/local/lib/bazel/bin/bazel-complete.bash
031 bazel version
Download and Install Anaconda python from https://repo.anaconda.com/archive/
037 cd ~
038 curl -OL https://repo.anaconda.com/archive/Anaconda3-2020.02-Linux-x86_64.sh
Ensure using bash shell
042 echo $0
043 /bin/bash Anaconda3-2020.02-Linux-x86_64.sh
044 exec bash
Set up Anaconda and install the required python packages in a virtual env
050 conda config --set auto_activate_base false
051 conda deactivate
052 conda update conda
053 conda update anaconda
054 conda update python
055 conda update --all
056 conda create --name tf-py37
057 conda activate tf-py37
058 conda install pip six numpy wheel setuptools mock 'future>=0.17.1' python=3.7
059 conda install protobuf==3.6.1 --no-deps
060 pip install keras_applications --no-deps
061 pip install keras_preprocessing --no-deps
Build tensorflow
062 cd ~
063 cd tensorflow
064 git checkout r2.0
065 ./configure
070 bazel shutdown && bazel clean
071 bazel build --config=opt //tensorflow/tools/pip_package:build_pip_package
There might be an error for gettid in grpc; download the patch and apply it. It is possible to press control-alt-F2 (mac is Control-fn-option-F2) to start another terminal while the bazel build is in progress.
080 cd ~
081 curl -OL https://gist.githubusercontent.com/drscotthawley/8eb51af1b4c92c4f18432cb045698af7/raw/a4aaa020f0434c58b08e453e6d501553ceafbc81/grpc.patch
082 patch -p2 --directory='tensorflow/bazel-tensorflow/external/grpc/src' < ~/grpc.patch
To review the patch before and after
085 grep -RIH 'gettid' --include="*.cc" --exclude-dir={git,log,assets} ~/.cache/bazel/*/*/external/grpc
086 grep -RIH 'sys_gettid' --include="*.cc" --exclude-dir={git,log,assets} ~/.cache/bazel/*/*/external/grpc
Alternatively, to do the patch using sed, but use with care
087 grep -RIl 'gettid' --include="*.cc" --exclude-dir={git,log,assets} ~/.cache/bazel/*/*/external/grpc | xargs sed -i 's/gettid/sys_gettid/g'
Continue building the package
083 cd tensorflow
084 bazel build --config=opt //tensorflow/tools/pip_package:build_pip_package
085 bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
Shell script utilities
100 curl -L https://tinyurl.com/BuildTensorflow | grep " [0-1][0-9][0-9] " > ~/history.txt
101 source <(grep -m 1 " 011 " ~/history.txt | cut -c8-)
102 source <(grep " 01[2-9] " ~/history.txt | cut -c8-)
103 history -ps $(grep -m 1 " 017 " ~/history.txt | cut -c8-)


5) To use this script in Arch Linux, first create a history.txt with the content in step 4 and use scp or samba to copy it to the Arch Linux environment.
Or simply download this post html page to the environment.
curl -L https://tinyurl.com/BuildTensorflow | grep " [0-1][0-9][0-9] " > ~/history.txt
or with comments
curl -L https://tinyurl.com/BuildTensorflow | grep -A105 ">Assume running as root" | sed -e 's/<[^>]*>//g' > ~/history.txt
shell script    Select all
# Run the command as in line 011 of the history.txt file
source <(grep -m 1 " 011 " ~/history.txt | cut -c8-)

# Run several commands as in line 012 to line 019 of the history.txt file
source <(grep " 01[2-9] " ~/history.txt | cut -c8-)

# Add line 017 of history.txt to the bash history without execution,
# so as to use arrow up to edit the command from history before execution.
history -ps $(grep -m 1 " 017 " ~/history.txt | cut -c8-)
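The `grep " 011 " | cut -c8-` pipeline above just selects a numbered line and strips its number. The same lookup can be sketched in Python for cases where a whole batch of numbered steps is driven from a script (`command_at` is a hypothetical helper, not from the post):

```python
def command_at(history_text: str, number: str) -> str:
    """Return the command on the line numbered `number`,
    mirroring: grep -m 1 " 011 " history.txt | cut -c8-"""
    for line in history_text.splitlines():
        stripped = line.strip()
        if stripped.startswith(number + " "):
            return stripped[len(number) + 1:]
    raise KeyError(number)

# Abbreviated sample of the history.txt format used above
sample = " 010 pacman -Sy\n 011 pacman -U --noconfirm pkg.tar.xz\n"
print(command_at(sample, "011"))  # → pacman -U --noconfirm pkg.tar.xz
```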





Sunday, May 3, 2020

How to install Arch Linux on a multiboot USB stick with persistent storage

This post follows the previous ones and adds Arch Linux to the USB stick, with persistent storage in the removable USB partition.

1) Download the iso file and copy to the ISO folder of the partition. The iso file can be downloaded from https://www.archlinux.org/download/

2) Create the partition required for the persistent ext4 storage say /dev/sda4 and find out the UUIDs of the ISO partition and also the persistent partition. In Arch Linux, the persistent partition is called cow_device.

3) Manually, add the menuentry in grub.cfg for Arch Linux as below
grub.cfg    Select all
menuentry "Arch Linux Persistent" {
    set isofile="/archlinux-2020.05.01-x86_64.iso"
    # In Linux, you can find the UUID by using : sudo blkid /dev/sda?
    set uuid="XXXX-XXXX"
    set imgdevpath="/dev/disk/by-uuid/$uuid"
    # search --fs-uuid --no-floppy --set=isopart $uuid
    loopback loop (${imgdevpath})${isofile}
    linux (loop)/arch/boot/x86_64/vmlinuz nomodeset loglevel=0 img_dev=$imgdevpath img_loop=$isofile earlymodules=loop cow_device=/dev/disk/by-uuid/XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX rw persistent quiet splash
    initrd (loop)/arch/boot/intel_ucode.img (loop)/arch/boot/amd_ucode.img (loop)/arch/boot/x86_64/archiso.img
}


4) After boot up, you will have a minimal terminal login in Linux and no GUI desktop. The first step is probably to set up the wifi connection, if there is no Ethernet connection. Please follow this guide for the required wifi setup: http://www.linuxandubuntu.com/home/how-to-setup-a-wifi-in-arch-linux-using-terminal

5) There is also a text file, install.txt, describing what to do after the internet connection is set up. Or you may want to install Anaconda and tensorflow as in the previous post. Please change the shell to bash before installing Anaconda in Arch Linux.

6) Tips for Arch Linux
change to bash shell before installing Anaconda : chsh -s /bin/bash; exit; echo $0
Terminal shortcuts and usage tips : https://linuxhandbook.com/linux-shortcuts/
and https://www.howtogeek.com/howto/44997/how-to-use-bash-history-to-improve-your-command-line-productivity/amp/
Display full history : history 0 | less
Build-essential package in Arch Linux: pacman -S base-devel
To install git: pacman -S git

7) As the bare-bones terminal has no mouse or trackpad working on MacBook, and in order to scroll the terminal text, press Fn-Shift-Up or Down.



Wednesday, April 29, 2020

How to install tensorflow gpu for Ubuntu 20.04 or Mac OSX

1. Boot into Ubuntu 20.04 or Ubuntu 18.04

2. Download Anaconda3-2020.02-Linux-x86_64.sh from https://repo.anaconda.com/archive/

3. Start installation in Terminal

shell script    Select all
wget https://repo.anaconda.com/archive/Anaconda3-2020.02-Linux-x86_64.sh
/bin/bash Anaconda3-2020.02-Linux-x86_64.sh
# After completion of installation, accept the init
exit


4. Start Terminal App again

5. Update Anaconda, create virtual environment and install packages
shell script    Select all
conda config --set auto_activate_base false
conda deactivate
# Exit conda shell
conda update conda
conda update anaconda
conda update python
conda update --all
mkdir -p $HOME/Projects
cd $HOME/Projects/
conda create --name tf-gpu
conda activate tf-gpu
# Now in (tf-gpu) shell
conda install tensorflow-gpu
conda install matplotlib pandas scikit-learn
conda install keras


6. Run the examples in how-to-install-tensorflow-with-gpu

or run the TensorFlow2 tutorial
conda activate tf-gpu
git clone https://github.com/lambdal/TensorFlow2-tutorial.git
cd TensorFlow2-tutorial/01-basic-image-classification/
python resnet_cifar.py


7. It is not necessary to have the required GPU hardware in order to install the gpu package. For a machine without the required GPU, first install the tensorflow_gpu package, then download and install the tensorflow wheel file here
pip install --upgrade ~/Downloads/tensorflow-2.1.0-cp37-cp37m-linux_x86_64.whl
in order to optimise for CPUs with AVX, AVX2, and FMA, or build from source here -> https://www.tensorflow.org/install/source



Install tensorflow-mkl for Mac OS X


The installation steps are very similar, except
Download and install Anaconda3-2020.02-MacOSX-x86_64.pkg
curl -OL https://repo.anaconda.com/archive/Anaconda3-2020.02-MacOSX-x86_64.pkg
And then follow step 4 to step 6 above. In step 5, additionally, install tensorflow-mkl from anaconda channel. tensorflow-mkl is optimized with Intel® MKL-DNN to use the following CPU instructions in performance critical operations: SSE4.1 SSE4.2 AVX AVX2 FMA (Specifically, Intel MKL-DNN is optimized for Intel® Xeon® processors and Intel® Xeon Phi™ processors). Package installation steps are
conda install tensorflow
conda install -c anaconda tensorflow-mkl
conda install matplotlib pandas scikit-learn keras
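Whether a given CPU actually supports the instruction sets listed above can be checked against the `flags` line of /proc/cpuinfo (or `sysctl machdep.cpu.features` on macOS). A small sketch, with the wanted set taken from the list above and the example flags lines being hypothetical:

```python
# The CPU instruction sets tensorflow-mkl is optimized for, from the list above
WANTED = {"sse4_1", "sse4_2", "avx", "avx2", "fma"}

def missing_features(cpuinfo_flags: str) -> set:
    """Given a /proc/cpuinfo `flags` line, return which of the
    MKL-relevant instruction sets the CPU does not report."""
    have = set(cpuinfo_flags.lower().split())
    return WANTED - have

# Example flags lines (abbreviated, hypothetical)
print(missing_features("fpu sse4_1 sse4_2 avx avx2 fma"))  # → set()
print(missing_features("fpu sse4_1 sse4_2 avx"))           # avx2 and fma missing
```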



Tuesday, April 28, 2020

How to have multiple persistent Ubuntu installs

The previous post discussed dual booting Windows 10 and Ubuntu with a persistent partition named "casper-rw". This post is about adding one more Ubuntu OS and allowing multiple persistent Ubuntu installs.

Method One: to have multiple persistent Ubuntu installs, use persistent-path in the grub menuentry.

There are limitations to the persistent-path solution.

1.1) persistent-path should be a folder in the FAT32 partition which also holds the ubuntu iso file. The persistent partition "casper-rw" should be an ext4 partition.

1.2) There should be filesystem image files casper-rw and home-rw, each with a maximum size limit of 4G. The persistent partition "casper-rw" has no such limit.

1.3) To create these 2 files, first boot ubuntu into a non-persistent live session
shell script    Select all
# assume the Ubuntu iso file is mounted in Ubuntu in /isodevice and is a FAT32 partition
sudo mkdir -p /isodevice/persist1/
cd /isodevice/persist1/
# create casper-rw, say 1G, and format it as an ext file system
sudo dd if=/dev/zero of=casper-rw bs=1M count=1024
sudo mkfs.ext4 -L casper-rw -F casper-rw
# home-rw usually needs a larger size, say 2G
sudo dd if=/dev/zero of=home-rw bs=1M count=2048
sudo mkfs.ext4 -L home-rw -F home-rw

# create another persist folder
sudo mkdir -p /isodevice/persist2/
cd /isodevice/persist2/
# create casper-rw, say 1G, and format it as an ext file system
sudo dd if=/dev/zero of=casper-rw bs=1M count=1024
sudo mkfs.ext4 -L casper-rw -F casper-rw
# home-rw usually needs a larger size, say 2G
sudo dd if=/dev/zero of=home-rw bs=1M count=2048
sudo mkfs.ext4 -L home-rw -F home-rw


1.4) The menuentry for both persistent-path and persistent partition are shown below

sudo nano /media/ubuntu/0000-????/grub/grub.cfg    Select all
# menuentry for persistent-path persist1 for Ubuntu 18.04
menuentry "Ubuntu 18.04 persistent-path=/persist1" {
    set isofile="/ubuntu-18.04.4.iso"
    # In Linux, you can find the UUID by using : sudo blkid /dev/sda?
    set uuid="XXXX-XXXX"
    search --fs-uuid --no-floppy --set=root $uuid
    loopback loop (${root})${isofile}
    linux (loop)/casper/vmlinuz boot=casper iso-scan/filename=${isofile} persistent persistent-path=/persist1 quiet splash
    initrd (loop)/casper/initrd
}

# menuentry for persistent-path persist2 for Ubuntu 16.04
menuentry "Ubuntu 16.04 persistent-path=/persist2" {
    set isofile="/ubuntu-16.04.6.iso"
    # In Linux, you can find the UUID by using : sudo blkid /dev/sda?
    set uuid="XXXX-XXXX"
    search --fs-uuid --no-floppy --set=root $uuid
    loopback loop (${root})${isofile}
    linux (loop)/casper/vmlinuz boot=casper iso-scan/filename=${isofile} persistent persistent-path=/persist2 quiet splash
    initrd (loop)/casper/initrd
}

# menuentry for persistent partition
menuentry "Ubuntu 18.04 persistent partition" {
    set isofile="/ubuntu-18.04.4.iso"
    insmod ntfs
    # In Linux, you can find the UUID by using : sudo blkid /dev/sda?
    # set uuid="0000-XXXX"
    # search --fs-uuid --no-floppy --set=root $uuid
    # loopback loop (${root})${isofile}
    # (hd0,3) means /dev/sda3; change it to (hd0,2) or (hd0,4) as necessary
    # This should be the partition of the ubuntu ISO
    loopback loop (hd0,3)${isofile}
    linux (loop)/casper/vmlinuz boot=casper iso-scan/filename=${isofile} persistent quiet splash
    initrd (loop)/casper/initrd
}


1.5) This persistent-path solution currently does not support the new Ubuntu 19.10 and 20.04 desktop images. Maybe it is still a bug to be solved.




Method Two: create two ext4 partitions, casper-rw1 and casper-rw2, so there is no 4G size limit as in method one, with two menu entries and OSs, and use a script file to rename and swap the partition label to casper-rw before working on the required OS. Method two is also suitable for testing the new Ubuntu 20.04 desktop image.

2.1) Assume, the ext4 partitions are formatted in the USB sticks as
/dev/sda5 casper-rw1 partition with size 10G is for Ubuntu 18.04 persistent
/dev/sda6 casper-rw2 partition with size 10G is for Ubuntu 20.04 persistent


2.2) The menuentry section in grub.cfg has labels showing the existing RW1-OK active persistent partition and other entries
grub.cfg    Select all
menuentry "Ubuntu 18.04 persistent partition RW1-OK" {
    set isofile="/ubuntu-18.04.4.iso"
    # In Linux, you can find the UUID by using : sudo blkid /dev/sda?
    set uuid="XXXXXXXXXXXXXXXX"
    search --fs-uuid --no-floppy --set=root $uuid
    loopback loop (${root})${isofile}
    linux (loop)/casper/vmlinuz boot=casper iso-scan/filename=${isofile} file=/cdrom/preseed/ubuntu.seed persistent quiet splash ---
    initrd (loop)/casper/initrd
}

menuentry "Ubuntu 20.04 persistent partition RW2-NO" {
    set isofile="/ubuntu-20.04.iso"
    # In Linux, you can find the UUID by using : sudo blkid /dev/sda?
    set uuid="XXXXXXXXXXXXXXXX"
    search --fs-uuid --no-floppy --set=root $uuid
    loopback loop (${root})${isofile}
    linux (loop)/casper/vmlinuz boot=casper iso-scan/filename=${isofile} file=/cdrom/preseed/ubuntu.seed persistent quiet splash ---
    initrd (loop)/casper/initrd
}

menuentry "Ubuntu 18.04 non-persistent Live Session" {
    set isofile="/ubuntu-18.04.4.iso"
    # In Linux, you can find the UUID by using : sudo blkid /dev/sda?
    set uuid="XXXXXXXXXXXXXXXX"
    search --fs-uuid --no-floppy --set=root $uuid
    loopback loop (${root})${isofile}
    linux (loop)/casper/vmlinuz boot=casper iso-scan/filename=${isofile} file=/cdrom/preseed/ubuntu.seed quiet splash ---
    initrd (loop)/casper/initrd
}


2.3) Create the following shell scripts in /isodrive/
rw1.sh    Select all
#!/bin/sh
# change to ubuntu 18.04 RW1-OK with /dev/sda5 labelled as "casper-rw"
sudo umount /dev/sda6 && sudo e2label /dev/sda6 "casper-rw2"
sudo umount /dev/sda5 && sudo e2label /dev/sda5 "casper-rw"
sed -i 's/\(\sRW1-NO\)\(.\{3,4\}$\)/ RW1-OK\2/; s/\(\sRW2-OK\)\(.\{3,4\}$\)/ RW2-NO\2/;' /media/ubuntu/0000-????/grub/grub.cfg
grep 'RW[12].\{3,3\}" {' /media/ubuntu/0000-????/grub/grub.cfg

rw2.sh    Select all
#!/bin/sh
# change to ubuntu 20.04 RW2-OK with /dev/sda6 labelled as "casper-rw"
sudo umount /dev/sda5 && sudo e2label /dev/sda5 "casper-rw1"
sudo umount /dev/sda6 && sudo e2label /dev/sda6 "casper-rw"
sed -i 's/\(\sRW1-OK\)\(.\{3,4\}$\)/ RW1-NO\2/; s/\(\sRW2-NO\)\(.\{3,4\}$\)/ RW2-OK\2/;' /media/ubuntu/0000-????/grub/grub.cfg
grep 'RW[12].\{3,3\}" {' /media/ubuntu/0000-????/grub/grub.cfg
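The sed one-liners in rw1.sh and rw2.sh only flip the RW1/RW2 status markers at the end of the menuentry titles. The same transformation is easier to read (and test) as a Python sketch; `mark_active` is a hypothetical helper for illustration, not from the post:

```python
import re

def mark_active(grub_cfg: str, active: int) -> str:
    """Flip the RW1/RW2 status markers in menuentry titles, mirroring
    the sed one-liners in rw1.sh / rw2.sh (active is 1 or 2)."""
    other = 2 if active == 1 else 1
    # mark the chosen partition's entry OK, the other one NO
    cfg = re.sub(r'RW%d-NO(" \{)' % active, r'RW%d-OK\1' % active, grub_cfg)
    cfg = re.sub(r'RW%d-OK(" \{)' % other, r'RW%d-NO\1' % other, cfg)
    return cfg

line1 = 'menuentry "Ubuntu 18.04 persistent partition RW1-NO" {'
line2 = 'menuentry "Ubuntu 20.04 persistent partition RW2-OK" {'
out = mark_active(line1 + "\n" + line2, active=1)
print("RW1-OK" in out and "RW2-NO" in out)  # → True
```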



2.4) Boot into the non-persistent live session and run the shell script sudo /bin/sh rw1.sh; it renames the /dev/sda5 partition label to casper-rw and adjusts the grub menu entry names to indicate that the Ubuntu 18.04 RW1-OK partition is active.

2.5) Reboot into grub menu and choose the Ubuntu 18.04 OS RW1-OK to run


2.6) To work on the other OS, repeat step 2.4 (boot into the non-persistent live session and run sudo /bin/sh rw2.sh) and step 2.5, choosing Ubuntu 20.04 RW2-OK.


Wednesday, April 22, 2020

How to install Dual Boot Windows 10 and Ubuntu 18.04 on external USB Stick

This blog post follows up the previous post on how to install Windows 10 on an external SSD, and demos adding the Ubuntu 18.04 iso and using GRUB2 to dual UEFI boot Windows 10 and Ubuntu 18.04 on one external USB stick or drive.

1. The first step is to use WinToUSB to install Windows 10 on the external USB, set up the UEFI partition, and finish the Windows drivers (e.g. the Boot Camp driver) for the USB stick successfully.
The necessary software download links are in the previous post

2. Use IM Magic Partition Resizer (Windows software) to shrink the WinToUSB partition on the USB stick so that it can hold the Ubuntu iso and persistence data. This partition action can be performed on the booted-up Windows USB stick itself, but it is safer to use another Windows machine. In Windows, create one FAT32 partition to hold the Ubuntu 18.04 iso file (say 10GB, in case more images are added later). Two more partitions for Ubuntu ext4 (say 50GB each) will be created later when booting into the Ubuntu non-persistent session. After that, download the Ubuntu 18.04 desktop image to the new partition that was created; the file ubuntu-18.04.4-desktop-amd64.iso is to be renamed ubuntu-18.04.4.iso.

3. Download and install grub-2.02-for-windows (not the latest version) on another Windows machine, not on the USB stick that Ubuntu is going to be added to.

4. Assume E drive is the UEFI partition of the plugged-in USB stick, not the WinToUSB partition where Windows 10 has been installed.

5. Install Grub2 for Windows (command reference from https://www.aioboot.com/en/install-grub2-from-windows/) and run Command Prompt as Administrator
cd C:\Users\%USERNAME%\Downloads\grub-2.02-for-windows\grub-2.02-for-windows
grub-install.exe --boot-directory=E:\ --efi-directory=E: --removable --target=x86_64-efi

6. Create E:\grub\grub.cfg with the menuentry items as below

notepad E:\grub\grub.cfg    Select all
#
# DO NOT EDIT THIS FILE
#
# It is automatically generated by grub2-mkconfig using templates
# from /etc/grub.d and settings from /etc/default/grub
#

### BEGIN /etc/grub.d/00_header ###
if [ -s $prefix/grubenv ]; then
  load_env
fi
if [ "${next_entry}" ] ; then
  set default="${next_entry}"
  set next_entry=
  save_env next_entry
  set boot_once=true
else
  set default="0"
fi

if [ x"${feature_menuentry_id}" = xy ]; then
  menuentry_id_option="--id"
else
  menuentry_id_option=""
fi

export menuentry_id_option

if [ "${prev_saved_entry}" ]; then
  set saved_entry="${prev_saved_entry}"
  save_env saved_entry
  set prev_saved_entry=
  save_env prev_saved_entry
  set boot_once=true
fi

function savedefault {
  if [ -z "${boot_once}" ]; then
    saved_entry="${chosen}"
    save_env saved_entry
  fi
}

function load_video {
  if [ x$feature_all_video_module = xy ]; then
    insmod all_video
  else
    insmod efi_gop
    insmod efi_uga
    insmod ieee1275_fb
    insmod vbe
    insmod vga
    insmod video_bochs
    insmod video_cirrus
  fi
}

if [ x$feature_default_font_path = xy ] ; then
  font=unicode
else
  font="${prefix}/fonts/unicode.pf2"
fi
#larger font size for Retina Display
#font="${prefix}/fonts/DejaVuSansMono.pf2"

if loadfont $font ; then
  #set gfxmode=1024x768x32,1024x768x24,1024x768x16,1024x768,auto
  #set gfxpayload=keep
  load_video
  insmod gfxterm
  #insmod png
  terminal_output gfxterm
  #background_image -m stretch $prefix/themes/splash.png
fi

if [ x$feature_timeout_style = xy ] ; then
  set timeout_style=menu
  set timeout=15
# Fallback normal timeout code in case the timeout_style feature is
# unavailable.
else
  set timeout=15
fi
### END /etc/grub.d/00_header ###

### BEGIN /etc/grub.d/10_linux ###
### END /etc/grub.d/10_linux ###

### BEGIN /etc/grub.d/20_linux_xen ###
### END /etc/grub.d/20_linux_xen ###

### BEGIN /etc/grub.d/30_os-prober ###
### END /etc/grub.d/30_os-prober ###

### BEGIN /etc/grub.d/40_custom ###
# This file provides an easy way to add custom menu entries. Simply type the
# menu entries you want to add after this comment. Be careful not to change
# the 'exec tail' line above.
### END /etc/grub.d/40_custom ###

### BEGIN /etc/grub.d/41_custom ###
if [ -f ${config_directory}/custom.cfg ]; then
  source ${config_directory}/custom.cfg
elif [ -z "${config_directory}" -a -f $prefix/custom.cfg ]; then
  source $prefix/custom.cfg;
fi
### END /etc/grub.d/41_custom ###

menuentry "Windows 10 Pro Boot Camp" {
  # UEFI - MBR method
  # set uuid="0000-XXXX"
  # search --fs-uuid --no-floppy --set=root $uuid
  # In Linux, you can find UUID by using : sudo blkid /dev/sda?
  # (hd0,1) means /dev/sda1 change it to (hd0,2) or (hd0,3) as necessary
  # This should be the partition of System Boot Partition
  set root=(hd0,1)
  chainloader (${root})/EFI/Microsoft/Boot/bootmgfw.efi
}

menuentry "Ubuntu 18.04 persistent" {
  set isofile="/ubuntu-18.04.4.iso"
  insmod ntfs
  # In Linux, you can find UUID by using : sudo blkid /dev/sda?
  # set uuid="0000-XXXX"
  # search --fs-uuid --no-floppy --set=root $uuid
  # loopback loop (${root})${isofile}
  # (hd0,3) means /dev/sda3 change it to (hd0,2) or (hd0,4) as necessary
  # This should be the partition of the ubuntu ISO
  loopback loop (hd0,3)${isofile}
  linux (loop)/casper/vmlinuz boot=casper iso-scan/filename=${isofile} persistent quiet splash
  initrd (loop)/casper/initrd
}

menuentry "Ubuntu 18.04 non-persistent Live Session" {
  set isofile="/ubuntu-18.04.4.iso"
  insmod ntfs
  # In Linux, you can find UUID by using : sudo blkid /dev/sda?
  # set uuid="0000-XXXX"
  # search --fs-uuid --no-floppy --set=root $uuid
  # loopback loop (${root})${isofile}
  # (hd0,3) means /dev/sda3 change it to (hd0,2) or (hd0,4) as necessary
  # This should be the partition of the ubuntu ISO
  loopback loop (hd0,3)${isofile}
  linux (loop)/casper/vmlinuz boot=casper iso-scan/filename=${isofile} quiet splash
  initrd (loop)/casper/initrd
}

menuentry "Restart" {
  reboot
}

menuentry "Power Off" {
  halt
}
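After editing grub.cfg it is easy to mistype a menuentry line; a quick way to sanity-check the file is to extract the quoted titles of each entry. A minimal sketch, run against a made-up sample fragment rather than the real E:\grub\grub.cfg:

```shell
# Sample grub.cfg fragment (titles here are illustrative only)
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
menuentry "Windows 10 Pro Boot Camp" {
menuentry "Ubuntu 18.04 persistent" {
menuentry "Ubuntu 18.04 non-persistent Live Session" {
EOF

# Extract just the quoted title of each menuentry line
titles=$(sed -n 's/^menuentry "\([^"]*\)".*/\1/p' "$cfg")
echo "$titles"
rm -f "$cfg"
```

Each title printed here is what will appear in the grub boot menu, so a missing line usually means a malformed menuentry.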


7. Boot into the USB stick, run the non-persistent session, and create the partitions for Ubuntu persistence

shell script    Select all
# Running on Ubuntu non-persistence
sudo fdisk -l
#assume /dev/sda is the USB stick
sudo fdisk /dev/sda
# (n) (p) (4) to recreate primary partition 4 for Linux in fdisk
# (n) (5) to recreate logical partition 5 for Linux in fdisk
# (w) to write to partition table and quit fdisk
# Reboot to let partition table take effect


8. Boot into the USB stick again, run the non-persistent session, and format the partitions for Ubuntu persistence

shell script    Select all
# Running on Ubuntu non-persistence
# check the disk device and assume the two partitions of Linux are sda4 and sda5
sudo fdisk -l
# format the second partition with label persistence and add persistence.conf
sudo mkfs.ext4 -L persistence /dev/sda4
sudo mkdir -p /media/ubuntu/persistence
sudo mount /dev/sda4 /media/ubuntu/persistence
echo / union | sudo tee /media/ubuntu/persistence/persistence.conf
# format the third partition with label casper-rw
sudo mkfs.ext4 -L casper-rw /dev/sda5
# Reboot to start other configuration and installation
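The persistence.conf created above is a single line telling the live system to overlay the whole root filesystem. A minimal sketch of creating and checking it, using a throwaway directory in place of the mounted /dev/sda4 (so it runs without sudo or a real partition):

```shell
# Temp directory stands in for the mounted /dev/sda4 persistence partition
mnt=$(mktemp -d)

# "/ union" tells the live-boot system to persist the whole root filesystem as an overlay
echo / union | tee "$mnt/persistence.conf"

content=$(cat "$mnt/persistence.conf")
echo "$content"
```

On the real stick the same echo is piped through sudo tee because the mounted partition is root-owned.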


9. Reboot the USB stick and choose the grub2 menu entries "Windows 10 Pro Boot Camp" and "Ubuntu 18.04 persistent" for testing and further installations

10. The final USB stick partitions are
/dev/sda1 /media/ubuntu/UUID-XXXX (Windows E Drive FAT32 for EFI Boot and Grub2)
/dev/sda2 /media/ubuntu/WinToUSB (Windows F Drive NTFS primary partition for Windows 10)
/dev/sda3 /isodevice (Windows G Drive FAT32 Extended Partition for iso images)
/dev/sda4 /media/ubuntu/persistence (Linux ext4 primary partition for Ubuntu)
/dev/sda5 /media/ubuntu/casper-rw (Linux ext4 primary partition for Ubuntu)
It should be noted that with an MBR partition table you can create at most 4 primary partitions, whereas GPT has no such limit. With MBR, the last partition /dev/sda5 can therefore be created as a logical partition; Ubuntu behaves the same whether the partition is primary or logical.


11. For a Retina Mac where the grub menu displays at a very tiny size, the solution is to create a larger font (say size 48) for the grub menu and enable it in grub.cfg
Please take note that Ubuntu 18.04 started to have better kernel support for the MacBook Retina Display, Keyboard and TrackPad.
# Install truetype font in Ubuntu
sudo apt-get install grub2
sudo apt-get install fonts-dejavu fonts-dejavu-core
# Create font size 48
sudo grub-mkfont -s 48 -o /media/ubuntu/UUID-5637/grub/fonts/DejaVuSansMono.pf2 /usr/share/fonts/truetype/dejavu/DejaVuSansMono.ttf
# Edit grub.cfg and uncomment the lines that enable the new font DejaVuSansMono.pf2
The converted font DejaVuSansMono.pf2 with font size 48 can also be downloaded here



Wednesday, April 15, 2020

How to create bootable DEBIAN PIXEL, STRETCH, BUSTER and Ubuntu 18.04 on one USB stick

This follows the previous post on "How to create bootable PIXEL USB stick for Mac".


The Debian PIXEL is for x86 platforms. The latest PIXEL ISO is a 2.0GB download, here:
https://downloads.raspberrypi.org/rpd_x86/images/rpd_x86-2017-06-23/2017-06-22-rpd-x86-jessie.iso

The latest Raspberry Pi Desktop image for Debian STRETCH is 2019-04-11-rpd-x86-stretch.iso (2.4GB), here:
https://downloads.raspberrypi.org/rpd_x86/images/rpd_x86-2019-04-12/2019-04-11-rpd-x86-stretch.iso

The new RASPBERRY PI DESKTOP image for Debian BUSTER, 2020-02-12-rpd-x86-buster.iso (3.13GB), is here:
https://downloads.raspberrypi.org/rpd_x86/images/rpd_x86-2020-02-14/2020-02-12-rpd-x86-buster.iso

And the first task is to create a USB stick for Mac/PC with three Debian OSs and the persistent feature. Important: the Linux kernel in these images might not boot properly with the modern MacBook Retina Display driver. For these MacBooks you might need the original Ubuntu 18.04 image in the later task.

And here are the instructions to create an EFI bootable USB stick on Mac

shell script    Select all
# Running on Mac
# list disk volumes
diskutil list
# assume format USB stick (128G) on /dev/disk1 with 2 partitions 16g and remaining 111g respectively
# if for USB stick (64G) on /dev/disk1 with 2 partitions 8g and remaining 55g respectively
# if for 32G USB stick, the 2 partition sizes can be divided into 4g and remaining 27g respectively
sudo diskutil partitionDisk /dev/disk1 MBRFormat FAT32 LINUX 16g FAT32 PERSISTENCE 0b
# for older Mac OSX 10.6, the partition type is "MS-DOS FAT32"
# sudo diskutil partitionDisk /dev/disk1 MBRFormat "MS-DOS FAT32" LINUX 16g "MS-DOS FAT32" PERSISTENCE 0b
# sudo diskutil partitionDisk /dev/disk1 MBRFormat "MS-DOS FAT32" LINUX 8g "MS-DOS FAT32" PERSISTENCE 0b
mkdir -p /Volumes/LINUX/efi/boot
# Download Enterprise-0.4.1.tar.gz to ~/Downloads
# from https://github.com/SevenBits/Enterprise/releases
cd ~/Downloads
curl -OL https://github.com/SevenBits/Enterprise/releases/download/v0.4.1/Enterprise-0.4.1.tar.gz
tar xzvf Enterprise-0.4.1.tar.gz
# Copy efi boot files to the USB stick
cp ~/Downloads/Enterprise-0.4.1/*.efi /Volumes/LINUX/efi/boot/
# Copy the jessie, stretch and buster iso to the USB stick
cp ~/Downloads/2017-06-22-rpd-x86-jessie.iso /Volumes/LINUX/efi/boot/pixel.iso
cp ~/Downloads/2019-04-11-rpd-x86-stretch.iso /Volumes/LINUX/efi/boot/stretch.iso
cp ~/Downloads/2020-02-12-rpd-x86-buster.iso /Volumes/LINUX/efi/boot/buster.iso
# create enterprise.cfg
cat > /Volumes/LINUX/efi/boot/enterprise.cfg << EOF
autoboot 0
entry Debian BUSTER non-persistence
family Debian
iso buster.iso
initrd /live/initrd1.img
kernel /live/vmlinuz1 findiso=/efi/boot/buster.iso boot=live config live-config quiet splash
EOF
# umount disk
sudo diskutil unmountDisk disk1

# Reboot Mac and press Option key on restart and select EFI Boot for boot menu

# Running on Debian PIXEL
sudo fdisk -l
#assume /dev/sdb is the USB stick
sudo fdisk /dev/sdb
# (d) (2) to delete partition 2
# and then (n) (p) (2) to recreate primary partition 2 for Linux in fdisk
# (w) to write to partition table and quit fdisk
# Reboot to let partition table take effect

# Running on Debian BUSTER
# unmount /dev/sdb2
sudo umount /dev/sdb2
# format and label /dev/sdb2
sudo mkfs.ext4 -L persistence /dev/sdb2
# rename /dev/sdb2 manually afterwards if needed
# sudo e2label /dev/sdb2 "persistence"
# create persistence.conf
sudo mkdir -p /mnt/persistence
sudo mount -t ext4 /dev/sdb2 /mnt/persistence
echo / union | sudo tee /mnt/persistence/persistence.conf
#unmount /dev/sdb2
sudo umount /dev/sdb2
# Reboot

# Running on Mac
# recreate enterprise.cfg with persistence
cat > /Volumes/LINUX/efi/boot/enterprise.cfg << EOF
entry Debian BUSTER persistence
family Debian
iso buster.iso
initrd /live/initrd1.img
kernel /live/vmlinuz1 findiso=/efi/boot/buster.iso boot=live config live-config quiet splash persistence

entry Debian STRETCH persistence
family Debian
iso stretch.iso
initrd /live/initrd1.img
kernel /live/vmlinuz1 findiso=/efi/boot/stretch.iso boot=live config live-config quiet splash persistence

entry Debian PIXEL persistence
family Debian
iso pixel.iso
initrd /live/initrd1.img
kernel /live/vmlinuz1 findiso=/efi/boot/pixel.iso boot=live config live-config quiet splash persistence

entry Debian BUSTER non-persistence
family Debian
iso buster.iso
initrd /live/initrd1.img
kernel /live/vmlinuz1 findiso=/efi/boot/buster.iso boot=live config live-config quiet splash
EOF
# umount disk
sudo diskutil unmountDisk disk1

# Reboot and run on Debian BUSTER to verify the persistence mounting
df -h
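The enterprise.cfg rewrite above is just a heredoc. A minimal sketch of generating and sanity-checking one entry in a throwaway directory (the directory stands in for the real /Volumes/LINUX/efi/boot, and the single entry is illustrative):

```shell
# Temp directory stands in for /Volumes/LINUX/efi/boot on the USB stick
bootdir=$(mktemp -d)

cat > "$bootdir/enterprise.cfg" << 'EOF'
entry Debian BUSTER persistence
family Debian
iso buster.iso
initrd /live/initrd1.img
kernel /live/vmlinuz1 findiso=/efi/boot/buster.iso boot=live config live-config quiet splash persistence
EOF

# Count boot entries and confirm the persistence flag reached the kernel line
entries=$(grep -c '^entry ' "$bootdir/enterprise.cfg")
echo "entries: $entries"
grep 'persistence$' "$bootdir/enterprise.cfg"
```

Counting `^entry ` lines is a quick way to confirm the heredoc wrote as many boot entries as you intended before rebooting to test them for real.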


For additional configuration and settings, or for the Ubuntu images (http://releases.ubuntu.com/16.04/ or http://releases.ubuntu.com/18.04/), which need a third casper-rw partition created in Linux to enable persistence, please refer to the previous post.
For the Ubuntu images, the related boot menu entries are below; the non-persistent entry is used to set up the persistence and casper-rw partitions on the USB stick. The nomodeset option is used when the video driver fails to boot up with a black screen. The Ubuntu 18.04 image should be more compatible with the modern Retina display.
enterprise.cfg    Select all
entry Ubuntu 16.04.6 persistent
family Ubuntu
iso ubuntu-16.04.6.iso
initrd /casper/initrd
kernel /casper/vmlinuz findiso=/efi/boot/ubuntu-16.04.6.iso file=/cdrom/preseed/ubuntu.seed boot=casper persistent nomodeset ---

entry Ubuntu 18.04.4 persistent
family Ubuntu
iso ubuntu-18.04.4.iso
initrd /casper/initrd
kernel /casper/vmlinuz findiso=/efi/boot/ubuntu-18.04.4.iso boot=casper persistent quiet splash ---

entry Ubuntu 18.04.4 non-persistent
family Ubuntu
iso ubuntu-18.04.4.iso
initrd /casper/initrd
kernel /casper/vmlinuz findiso=/efi/boot/ubuntu-18.04.4.iso boot=casper quiet splash ---


Wednesday, April 8, 2020

Install Windows 10 on External SSD HD

I find this tutorial on installing Windows 10 useful for creating a Windows To Go USB disk on an SSD drive.
https://youtu.be/U6UsStKUIL8

But there are some points to note as below:

(1) I used a Windows 10 machine and the Microsoft Media Creation Tool to create the windows.iso, which includes the latest 2019 update of Windows 10. The created iso image Windows_10_1909_EN_64bits.iso can be downloaded here.

(2) The Samsung T5 is a good choice; it comes with 2 cables (USB-C to USB-A and USB-C to USB-C) and has 500GB, 1TB and 2TB options.

(3) The WinToUSB Free Version can only install the Home and Education editions of Windows 10 to the external HD. For the Windows 10 Pro version you have to purchase the Professional or Enterprise license. For the partition option, the free version can choose GPT for UEFI; the more flexible option, MBR for BIOS and UEFI, is only available in the Professional version. The external HD partition will be erased, not kept, when either of these options is chosen. You have to use other partitioning software if an additional FAT volume is needed.

(4) The Boot Camp Windows Support files are copied to the Public folder on the external HD so that the drivers can be installed right after the external HD's Windows 10 boots up.

(5) For a MacBook with one USB-C port, prepare a USB-C hub and a wired mouse for the initial Windows setup, as the MacBook keyboard and trackpad do not work at all initially.

(6) A Windows 7 or Windows 8 product key (box / retail / education version) can be used for a clean install of Windows 10.

(7) I used a 32-bit Windows 8 Pro Education product key to activate this clean installation of Windows 10, which has now become the Windows 10 Pro 64-bit version.



To do list

(1) Shrink the NTFS partition and create a FAT32 partition for transferring data between Mac and Windows. Solution: use IM Magic Partition Resizer (Windows software).
(2) Use Tuxera NTFS for macOS ($15) for write access to the NTFS partition.
(3) Use the macOS experimental support for writing to NTFS drives. This is not recommended; see https://www.howtogeek.com/236055/how-to-write-to-ntfs-drives-on-a-mac/



Tested with the following software

Photoshop
After Effects
Illustrator



Swift 5.2 on Ubuntu for Windows 10 bash shell

Prerequisite:
Windows 10 64 bits Home or Pro Edition
Enable WSL via powershell as Administrator
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Windows-Subsystem-Linux
Enable-WindowsOptionalFeature -Online -FeatureName VirtualMachinePlatform


Install Ubuntu App from Microsoft Store

Swift 5.2 on Ubuntu (18.04 LTS) for Windows 10 bash shell

You need to download and extract the Release (Ubuntu 18.04) from swift.org and run the following under the bash shell of the Ubuntu App.
shell script    Select all
# Show Ubuntu Version
lsb_release -a
# download and extract swift 5.2 Release
cd ~/
wget https://swift.org/builds/swift-5.2-release/ubuntu1804/swift-5.2-RELEASE/swift-5.2-RELEASE-ubuntu18.04.tar.gz
tar xzvf swift-5.2-RELEASE-ubuntu18.04.tar.gz
sudo mv ~/swift-5.2-RELEASE-ubuntu18.04 /usr/share/swift-5.2
# install packages and development tools
sudo apt-get update
sudo apt-get install -y clang
sudo apt-get install -y libcurl3 libpython2.7 libpython2.7-dev
sudo apt-get install -y libcurl4-openssl-dev
sudo apt-get install -y git build-essential
# add path in ~/.bashrc
echo "export PATH=/usr/share/swift-5.2/usr/bin:\$PATH" >> ~/.bashrc
# disable color in ~/.bashrc
echo 'export TERM=xterm-mono' >> ~/.bashrc
#reload ~/.bashrc
source ~/.bashrc
# Check the Swift version
swift --version
# test Foundation, libdispatch and swiftc compile
cd $HOME
cat > hello.swift <<EOF
import Foundation
let device = "WIN10"
print("Hello from Swift on \(device)")
print("\(NSTimeZone.default.abbreviation()!) \(NSDate())")
// Test libdispatch
import Dispatch
var my_dispatch_group = DispatchGroup()
let concurrentQueue = DispatchQueue(label: "myqueuename", attributes: DispatchQueue.Attributes.concurrent)
for a in 1...20 {
    my_dispatch_group.enter()
    let block = DispatchWorkItem {
        print("do something at \(a)")
    }
    my_dispatch_group.leave()
    my_dispatch_group.notify(queue: concurrentQueue, work:block)
}
let item = DispatchWorkItem {
    print("PROGRAM ENDED \(NSTimeZone.default.abbreviation()!) \(NSDate())")
}
my_dispatch_group.notify(queue: DispatchQueue.global(qos:.userInitiated), work:item)
print("press enter to exit")
let _ = readLine(strippingNewline: true)
EOF
swiftc hello.swift
./hello
# test Swift Package Manager
mkdir -p $HOME/test1
cd $HOME/test1
swift package init
swift test
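The PATH line appended to ~/.bashrc above can be checked without touching the real file. A minimal sketch using a throwaway rc file (so the demo does not need the Swift toolchain actually installed):

```shell
# Temp file stands in for ~/.bashrc so the real one is untouched
rc=$(mktemp)
echo "export PATH=/usr/share/swift-5.2/usr/bin:\$PATH" >> "$rc"

# Source it and confirm the toolchain directory now leads PATH
. "$rc"
case "$PATH" in
  /usr/share/swift-5.2/usr/bin:*) echo "swift toolchain on PATH" ;;
esac
```

Because the toolchain directory is prepended, its swift binary shadows any other swift on the system once a new shell is started.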


Follow this guide to install docker on Ubuntu 16.04 LTS. Currently 18.04 is not working for Docker CE on WSL.
https://medium.com/faun/docker-running-seamlessly-in-windows-subsystem-linux-6ef8412377aa


To get Swift 5.2 on Ubuntu 16.04
wget https://swift.org/builds/swift-5.2-release/ubuntu1604/swift-5.2-RELEASE/swift-5.2-RELEASE-ubuntu16.04.tar.gz
tar xzvf swift-5.2-RELEASE-ubuntu16.04.tar.gz
sudo mv ~/swift-5.2-RELEASE-ubuntu16.04 /usr/share/swift-5.2




For docker on WSL, you might encounter a GPG signature error on apt-get update. The temporary solution is to change the storage driver to vfs, as the default overlay2 driver does not work under WSL. But the performance of vfs is really poor.
shell script    Select all
# create the following file in /etc/docker/daemon.json and restart the machine and docker daemon
echo '{"storage-driver": "vfs"}' | sudo tee /etc/docker/daemon.json
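A minimal sketch of writing and checking that daemon.json, using a temp directory in place of the real /etc/docker (so no sudo and no docker restart are needed for the demo):

```shell
# Temp directory stands in for /etc/docker on a real system
confdir=$(mktemp -d)
echo '{"storage-driver": "vfs"}' | tee "$confdir/daemon.json"

# Quick sanity check that the key made it into the file
grep '"storage-driver": "vfs"' "$confdir/daemon.json" && echo "daemon.json ok"
```

On the real system the docker daemon only picks the setting up after a restart, which is why the comment in the script above says to restart the machine and the docker daemon.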


Click on the docker label below to see some previous examples on using docker.



For example using the quantlib-python3 docker image
docker run -t -i lballabio/quantlib-python3:latest bash

To test the quantlib-python3 docker image
shell script    Select all
# Display ubuntu version
apt-get update
apt install lsb-release
lsb_release -a
# Test python 3 QuantLib
apt-get install python3 python3-pip -y
pip3 install numpy QuantLib-Python==1.18
pip3 freeze
cd $HOME
cat > $HOME/swap.py <<EOF
from __future__ import print_function
import numpy as np
import QuantLib as ql
print("QuantLib version is", ql.__version__)
# Set Evaluation Date
today = ql.Date(31,3,2015)
ql.Settings.instance().setEvaluationDate(today)
# Setup the yield termstructure
rate = ql.SimpleQuote(0.03)
rate_handle = ql.QuoteHandle(rate)
dc = ql.Actual365Fixed()
disc_curve = ql.FlatForward(today, rate_handle, dc)
disc_curve.enableExtrapolation()
hyts = ql.YieldTermStructureHandle(disc_curve)
discount = np.vectorize(hyts.discount)
start = ql.TARGET().advance(today, ql.Period('2D'))
end = ql.TARGET().advance(start, ql.Period('10Y'))
nominal = 1e7
typ = ql.VanillaSwap.Payer
fixRate = 0.03
fixedLegTenor = ql.Period('1y')
fixedLegBDC = ql.ModifiedFollowing
fixedLegDC = ql.Thirty360(ql.Thirty360.BondBasis)
index = ql.Euribor6M(ql.YieldTermStructureHandle(disc_curve))
spread = 0.0
fixedSchedule = ql.Schedule(start, end, fixedLegTenor, index.fixingCalendar(), fixedLegBDC, fixedLegBDC, ql.DateGeneration.Backward, False)
floatSchedule = ql.Schedule(start, end, index.tenor(), index.fixingCalendar(), index.businessDayConvention(), index.businessDayConvention(), ql.DateGeneration.Backward, False)
swap = ql.VanillaSwap(typ, nominal, fixedSchedule, fixRate, fixedLegDC, floatSchedule, index, spread, index.dayCounter())
engine = ql.DiscountingSwapEngine(ql.YieldTermStructureHandle(disc_curve))
swap.setPricingEngine(engine)
print(swap.NPV())
print(swap.fairRate())
EOF
# Test python3
cd $HOME
python3 swap.py
# Install boost 1.71
apt-get install build-essential
export boost_version=1.71.0; export boost_dir=boost_1_71_0; cd $HOME; wget https://dl.bintray.com/boostorg/release/${boost_version}/source/${boost_dir}.tar.gz
export boost_version=1.71.0; export boost_dir=boost_1_71_0; cd $HOME; tar xfz ${boost_dir}.tar.gz && cd ${boost_dir} && ./bootstrap.sh && ./b2 --without-python --prefix=/usr -j 4 link=shared runtime-link=shared install && cd .. && rm -rf ${boost_dir} && ldconfig
# Install quantlib 1.17
export quantlib_version=1.17; cd $HOME; wget https://dl.bintray.com/quantlib/releases/QuantLib-${quantlib_version}.tar.gz
export quantlib_version=1.17; cd $HOME; tar xfz QuantLib-${quantlib_version}.tar.gz && cd QuantLib-${quantlib_version} && ./configure --prefix=/usr --disable-static CXXFLAGS=-O3 && make -j 4 && make install && cd .. && ldconfig
# Test quantlib 1.17
# Create testql.cpp
cd $HOME
cat > testql.cpp << 'testqlEOF'
#include <ql/quantlib.hpp>
int main() {
    std::cout << "BOOST version is " << BOOST_VERSION << std::endl;
    std::cout << "QL version is " << QL_VERSION << std::endl;
#if __x86_64__ || __WORDSIZE == 64
    std::cout << "This is 64 bits" << std::endl;
#elif __i386__ || __WORDSIZE == 32
    std::cout << "This is 32 bits" << std::endl;
#else
    std::cout << "This is something else" << std::endl;
#endif
    return 0;
}
testqlEOF
g++ testql.cpp -lQuantLib -o testql
./testql
# Test QuantLib Examples
cd $HOME
g++ QuantLib-*/Examples/Bonds/Bonds.cpp -lQuantLib -o testBonds
./testBonds
cd $HOME
g++ QuantLib-*/Examples/FRA/FRA.cpp -lQuantLib -o testFRA
./testFRA
# Python 2 installation
apt-get install python python-pip -y
pip2 install numpy QuantLib-Python==1.17
pip2 freeze
apt-get install git -y
# Test python2
cd $HOME
git clone git://github.com/mmport80/QuantLib-with-Python-Blog-Examples.git
cd QuantLib-with-Python-Blog-Examples/
python2 blog_frn_example.py
cd $HOME
python2 swap.py



Export and Import of container
docker export CONTAINER_NAME | gzip > NAME.gz
zcat NAME.gz | docker import - IMAGE-NAME
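The export/import pair above is just a gzip pipeline around a tar stream. A minimal sketch of the same round trip using plain tar in place of docker, so it runs without a docker daemon (file names here are made up for illustration):

```shell
# Build a tiny filesystem stand-in for a container's contents
workdir=$(mktemp -d)
echo hello > "$workdir/rootfs.txt"

# analogue of: docker export CONTAINER_NAME | gzip > NAME.gz
tar -C "$workdir" -cf - rootfs.txt | gzip > "$workdir/NAME.gz"

# analogue of: zcat NAME.gz | docker import - IMAGE-NAME
listing=$(zcat "$workdir/NAME.gz" | tar -tf -)
echo "$listing"
```

docker export produces exactly such a tar stream of the container's filesystem, which is why gzip and zcat can sit on either side of the pipe.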