ESP32-based old clock controller with NTP sync

I was able to find an old “slave” clock from 1960 (according to the serial number).
It is a Pragotron PJ 27, 12V version. These clocks were produced in Czechoslovakia during
the 1960s and use the PS-1 mechanism.

This type of clock was usually used in organizations with central time management systems, e.g. schools, factories, etc. Initially they were driven by a big mechanical “master” clock; later such systems were replaced by digital ones.

The clock mechanism is very simple: it expects an impulse every minute, with the polarity reversed each time, to advance the hands. There are also 6V, 24V, and 60V versions.

Making new clock controller hardware

After cleaning the clocks of dust I tested them with a 12V power supply and found
that they still work. As I don't have any master clock to drive them, I
decided to build my own controller. I already had an ESP32 board with an on-board OLED
screen, so I decided to use it. I also found a 220V → 12V/1A power supply left over
from another project.

To generate 12V impulses with alternating polarity I decided to buy an L298N DC motor driver module.
It should be more reliable than a set of relays, the polarity can be set using
TTL inputs, and the module itself is very cheap (1-2$). It also provides a 12V → 5V converter,
so the ESP32 board can be powered from it. I am currently using only one channel to drive the clock;
the second channel could be used for an alarm or for a clock in a different timezone.

The wiring in this case was very simple: 12V goes directly to the L298N module, the L298N 5V
output and GND go to the ESP32 5V and GND, and the L298N IN1 and IN2 inputs are connected to GPIO
pins 12 and 13. The clock is connected to the L298N OUT1 motor output.
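
The impulse generation itself is simple. Below is a minimal Arduino-style sketch of the idea, using the pin assignment above; the pulse width, the function names and the timing loop are illustrative assumptions, not the actual firmware (see the repo linked at the end of the post).

// Hypothetical sketch of the impulse generation, not the actual firmware.
const int PIN_IN1 = 12;   // L298N IN1
const int PIN_IN2 = 13;   // L298N IN2
bool polarity = false;    // flipped after every impulse

void setup() {
  pinMode(PIN_IN1, OUTPUT);
  pinMode(PIN_IN2, OUTPUT);
  digitalWrite(PIN_IN1, LOW);
  digitalWrite(PIN_IN2, LOW);
}

// Send one impulse to the slave clock and reverse polarity for the next one.
void advanceClock() {
  digitalWrite(polarity ? PIN_IN1 : PIN_IN2, HIGH);
  delay(400);                                  // pulse width is an assumption
  digitalWrite(polarity ? PIN_IN1 : PIN_IN2, LOW);
  polarity = !polarity;
}

void loop() {
  advanceClock();
  delay(60000);  // the real controller schedules this from NTP time, not delay()
}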

Software part

As the ESP32 does not have a real RTC I decided to use NTP over WiFi as a precise time source. This way I can avoid an additional battery-backed RTC module.
To store the slave clock state I am using the ESP32 flash. I built the controller software
using the Arduino IDE. Some of the implemented features:

  • On boot, it connects to WiFi and uses NTP to get the actual time. After the initial sync
    the time is updated from NTP every 5 minutes.
  • Timezone support is implemented using the Timezone library.
  • The actual time, slave clock status, WiFi status, and NTP sync status are displayed on the OLED screen.
  • There is a special “init” mode which is enabled by touching GPIO15 and rebooting.
    In this mode impulses are generated every second. When the slave clock reaches 12:00
    the pin needs to be released. This mode is useful for the initial setup or for testing the slave.
  • The state is saved every minute to the ESP32 flash using the “Preferences” library (see the
    sketch after this list). It uses the “nvs” partition, which implements basic wear leveling.
    To make it more efficient I changed the NVS partition size to 1MB.
  • To avoid OLED degradation the screen goes into a “screen-saver” mode after 10 minutes. To exit the screensaver, touch the GPIO15 pin.
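
As an illustration of the state persistence, here is a rough sketch using the ESP32 Arduino Preferences API; these are functions one could call from the firmware's minute handler, and the namespace and key names are my assumptions, not necessarily what the actual firmware stores.

#include <Preferences.h>

Preferences prefs;

// Illustrative namespace/keys; the actual firmware may store a different structure.
void saveClockState(uint8_t hour, uint8_t minute, bool polarity) {
  prefs.begin("clock", false);          // open (or create) the namespace in NVS
  prefs.putUChar("hour", hour);
  prefs.putUChar("minute", minute);
  prefs.putBool("polarity", polarity);  // which polarity the next impulse needs
  prefs.end();
}

void loadClockState(uint8_t &hour, uint8_t &minute, bool &polarity) {
  prefs.begin("clock", true);           // read-only
  hour = prefs.getUChar("hour", 12);    // defaults match the 12:00 "init" position
  minute = prefs.getUChar("minute", 0);
  polarity = prefs.getBool("polarity", false);
  prefs.end();
}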

The code is available in the samm-git/clock-controller-esp GitHub repo; comments are welcome.

Summary

It was an interesting journey to add NTP and WiFi support to a device from 1960.
I was able to find a case from an old time relay which now hosts my controller.
Maybe in the future I will add more devices to this master.


How to transfer pictures wirelessly from a Sony camera without using Playmemories on macOS

Sony cameras can transfer photos to a PC over a WiFi connection. To support this on the host side Sony provides software called “Playmemories”. This software never worked very well and was eventually abandoned by Sony completely. After upgrading to macOS Catalina I found that it no longer works, with no vendor updates available. So I started to look for alternatives.

How it works

Internally Sony uses the PTP/IP protocol to transfer files. When the user chooses the ‘Send to Computer’ option in the camera menu, the device starts sending UDP packets to the 239.255.255.250:1900 multicast address. The software (Playmemories) captures this packet, connects to the camera, and starts the sync process.
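
To observe these announcements yourself, a small stand-alone listener is enough. The sketch below (plain POSIX sockets, not part of either tool discussed later) joins the 239.255.255.250:1900 multicast group and prints whatever the camera sends:

#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdio>

int main() {
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    int yes = 1;
    setsockopt(sock, SOL_SOCKET, SO_REUSEADDR, &yes, sizeof(yes));

    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(1900);                 // SSDP port used by the camera
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    if (bind(sock, (sockaddr *)&addr, sizeof(addr)) < 0) { perror("bind"); return 1; }

    // Join the multicast group the camera announces itself on.
    ip_mreq mreq{};
    mreq.imr_multiaddr.s_addr = inet_addr("239.255.255.250");
    mreq.imr_interface.s_addr = htonl(INADDR_ANY);
    setsockopt(sock, IPPROTO_IP, IP_ADD_MEMBERSHIP, &mreq, sizeof(mreq));

    char buf[2048];
    for (;;) {
        ssize_t n = recvfrom(sock, buf, sizeof(buf) - 1, 0, nullptr, nullptr);
        if (n <= 0) break;
        buf[n] = '\0';
        printf("--- announcement ---\n%s\n", buf);  // a sync tool would parse this
    }
    close(sock);
    return 0;
}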

After the initial network configuration the camera needs to be connected to the computer over USB once to set the PTP/IP GUID. This is a one-time operation which is also handled by Playmemories.

OpenSource implementations

I was able to find two working OSS implementations:

  • falk0069/sony-pm-alt – uses gphoto2 with a Python wrapper and provides a C program to set the initial GUID on the camera.
  • shezi/airmtp – a pure Python 2 MTP implementation with some additional options. It does not provide any way to initialize the camera over USB.

I found that both tools work fine.

sony-pm-alt

The tool comes with an initial GUID setter, which uses libusb-1.0 to configure the GUID on the camera. I was able to compile it on macOS:

clang `pkg-config libusb-1.0 --libs --cflags` sony-guid-setter.c -o sony-guid-setter

However, it turned out to be useless, as macOS opens the USB device exclusively when you connect the camera. To work around that I compiled the tool in a VirtualBox Linux VM and passed the camera through using USB pass-through. This worked well and I got the camera configured. In my case the exact command was sudo ./sony-guid-setter 054c:08af -g. This operation needs to be done only once per camera network setup.

To use wireless transfer I installed gphoto2 from brew (brew install gphoto2) and changed PHOTO_DIR in sony-pm-alt.py to point to my user folder. I found that this application works fine and is able to transfer photos from the camera.

airmtp

AirMTP is a pure Python 2 application, so it does not have any external dependencies. It also supports many additional sync options, e.g. the ability to skip old files, specify which extensions to download, etc.

To use it with the camera you run it with a command line like airmtp.py --ipaddress auto --outputdir and start the transfer on the camera.

I found that the performance of both tools is more or less the same, so I decided to use airmtp as it has no external dependencies and supports more options. To run it on startup I created the following plist and placed it in ~/Library/LaunchAgents/airmtp.plist:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
  <dict>
    <key>KeepAlive</key>
    <true/>
    <key>Label</key>
    <string>airmtp</string>
    <key>ProgramArguments</key>
    <array>
      <string>/usr/local/airmtp/airmtp.py</string>
      <string>--extlist</string>
      <string>JPG</string>
      <string>--ipaddress</string>
      <string>auto</string>
      <string>--outputdir</string>
      <string>/Users/user/pics</string>
      <string>--ifexists</string>
      <string>skip</string>
    </array>
    <key>RunAtLoad</key>
    <true/>
  </dict>
</plist>

To enable the service run launchctl load -w ~/Library/LaunchAgents/airmtp.plist. Additionally, I added Python code which shows a macOS notification on camera connect and file transfer. I will publish my changes later.

Summary

Thanks to OSS software I can use the camera sync functionality again, and this time without vendor lock-in.


AWS Client VPN internals

About AWS Client VPN

AWS Client VPN is a managed client-based VPN service provided by AWS. With Client VPN, it is possible to access AWS resources from any location using an OpenVPN-based VPN client.

Recently AWS added the ability to use a SAML IdP for user authorization and authentication (see image).

aws-saml

However, SAML federation requires you to use the proprietary AWS VPN Client, which is available only for Windows and macOS. Moreover, the client is closed source and very limited.

In this post, I will show how it works under the hood and how to connect to it using the native OpenVPN binary.

Update from 25.9.20 – AWS source code found

AWS published their changes to OpenVPN on S3, with the link available in the “About” window. The patch in my repository has been updated to the official one; everything else is still relevant.

What is wrong with the “native AWS client”

  • Available only for macOS and Windows. OpenVPN itself supports pretty much every platform which can establish VPN connections.
  • Closed source. In combination with root access this adds security risks.
  • VERY limited. No hooks, no timeouts, no log export, nothing at all. Just “import profile” and “Connect”. If things are not working, you have to search for the logs in the /tmp folder (??).
  • The client whitelists only very few OpenVPN options. If you try to add any non-whitelisted option to the config, the client will fail to start. This includes inactivity timeout settings, scripting, etc.
  • No documented way to create customized packages with a pre-loaded config

Hopefully AWS will address some of these limitations in the future and publish the source code of their “client”.

How “native client” works

I used the Wireshark and LLDB tools to find out how the client really works. The user flow looks like this:

  1. On the first run, you import a profile into the AWS VPN Client. The client detects the auth-federate keyword in it and saves the config in ~/.config/AWSVPNClient/OpenVpnConfigs. The special auth-federate keyword is removed at this stage.
  2. The user runs the AWS VPN Client and uses the “connect” menu to connect.
  3. The AWS VPN Client opens a web browser and redirects it to the SAML IdP page. After authorization the browser shows the “Authentication details received, processing details. You may close this window at any time.” message.
  4. The client connects to the gateway and traffic starts to flow via the VPN.

Now, let’s take a look at what is going on internally.

  1. A wrapper written in Mono starts the OpenVPN binary (part of the package) and starts an HTTP server at the http://127.0.0.1:35001/ address.
  2. Using the OpenVPN management interface it asks OpenVPN to connect to the provided gateway with the username N/A and the password ACS::35001. This (of course) fails with an authentication failure, but as the failure reason the VPN server sends the SAML redirect URL.
  3. The wrapper opens this URL in the browser. If the SAML flow succeeds, the IdP redirects the browser with POST data to http://127.0.0.1:35001/. The HTTP POST data contains the SAMLResponse field, which the Mono wrapper captures on this URL.
  4. The Mono wrapper asks OpenVPN to establish the connection a second time, but now with N/A as the username and the SAMLResponse plus some session data as the password.
  5. The AWS VPN server validates them, and if they look valid (e.g. signed by the corresponding IdP, etc.) starts the session.

How to connect with OSS OpenVPN to the AWS Client VPN using SAML

I decided to emulate this flow. I started by writing a small HTTP server in golang which listens on 127.0.0.1:35001 and saves the SAMLResponse POST form field to a file. The next step was to write a shell wrapper which emulates the activity of the Mono wrapper in the AWS client. I decided not to use the management interface but to run the OpenVPN binary directly.
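
My helper is written in Go (repository linked at the end of the post), but to make the mechanism clearer, here is a rough single-connection C++ sketch of what it has to do: accept the IdP POST on 127.0.0.1:35001, extract the still URL-encoded SAMLResponse field, save it for the second OpenVPN attempt, and answer the browser. This is an illustration of the idea, not the code from my repo.

// Rough sketch of the SAMLResponse capture step; the real helper is written in Go.
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstring>
#include <fstream>
#include <string>

int main() {
    int srv = socket(AF_INET, SOCK_STREAM, 0);
    int yes = 1;
    setsockopt(srv, SOL_SOCKET, SO_REUSEADDR, &yes, sizeof(yes));

    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(35001);                      // port the IdP is redirected to
    addr.sin_addr.s_addr = inet_addr("127.0.0.1");
    if (bind(srv, (sockaddr *)&addr, sizeof(addr)) < 0 || listen(srv, 1) < 0) return 1;

    int client = accept(srv, nullptr, nullptr);
    std::string req(65536, '\0');
    ssize_t n = recv(client, &req[0], req.size(), 0);  // sketch: assumes the POST fits in one read
    req.resize(n > 0 ? n : 0);

    // The IdP POSTs a form body; keep the (still URL-encoded) SAMLResponse field.
    size_t pos = req.find("SAMLResponse=");
    if (pos != std::string::npos) {
        size_t end = req.find('&', pos);
        std::ofstream("saml-response.txt")
            << req.substr(pos + 13, end == std::string::npos ? std::string::npos : end - pos - 13);
    }

    const char *resp = "HTTP/1.1 200 OK\r\nConnection: close\r\n\r\n"
                       "Got SAMLResponse, you may close this window.\r\n";
    send(client, resp, strlen(resp), 0);
    close(client);
    close(srv);
    return 0;
}

The shell wrapper then feeds the captured value, combined with the session data, as the password for the second OpenVPN attempt, mirroring step 4 above.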

Surprisingly, I was able to get the connection up, but only with the acvc-openvpn binary from the AWS VPN Client.app package. So I decided to build OpenVPN myself to debug why it was not working with the OSS binary. After some experiments the reasons were found:

  • The password length in OSS OpenVPN is limited to 128 bytes, while the SAML response is ~11KB. I extended this size but ran into another problem, a TLS error.
  • It turned out that the password block did not fit into the TLS_CHANNEL_BUF_SIZE limit in OpenVPN, so I had to extend it as well.
  • After that I was able to connect. Eventually I found the modified OpenVPN source code published by AWS, and it shows that my changes are similar to theirs, but they set much higher limits (up to 256KB). My repo was updated to include the AWS patch instead of mine.

The patch is available here. At this point I was able to connect and use the VPN.

TODO

So far my PoC can connect to the VPN server. After connecting, it works the same way as the AWS client. I have already tested both TCP and UDP setups, with ports 443 and 1194. Some things to do (if I find some time):

  • Make the golang wrapper smarter and replace the shell wrapper entirely
  • Think about how to integrate this with Tunnelblick or other OSS UIs for OpenVPN

As usual, patches and contributions are welcome; the repository URL is github.com/samm-git/aws-vpn-client.


Official macOS NVMe SMART header found

Apple has finally published the header of the NVMe SMART interface (NVMeSMARTLibExternal.h). It was found in the latest Xcode update by Harry Mallon, who also provided the initial version of the smartmontools patch to use it. I adopted and fixed this patch, so the latest smartmontools version got Log Pages support on macOS. The original Apple header can be found in my gist.

The good news is that my effort to reconstruct the API was mostly correct. I also found some functions (GetFieldCounters, GetSystemCounters, GetAlgorithmCounters) that are not exported officially. The mystery of the non-working GetLogPage function is also resolved: the second parameter is the size (as I expected), but in DWORDs and starting from 0 (e.g. 1 == 2 DWORDs == 64 bits), and there is strict validation of it.


Migration to Let’s Encrypt V2 API with acmetool

Why acmetool?

A long time ago I migrated from certbot to acmetool due to its simplicity and much better design. It is still working perfectly, managing many certificates without any headache. The only problem was the new (V2) ACME API, which becomes mandatory starting from July 2020. The development of acmetool is not very active, but at some point the author provided a new (beta) release with V2 protocol support. The migration process is not documented, so I decided to write this blog post.

How to migrate

I would recommend starting with a backup of the ACME_STATE_DIR directory. It is located at /var/lib/acme on Linux and /var/db/acme on FreeBSD. During migration the content of this directory will be changed.

The next step is to install the new binary. I have already updated the acmetool FreeBSD port and found that it is also updated in Debian SID. If your OS does not have the updated version yet, the binary can easily be built using a recent golang compiler. When the binary upgrade is done:

  1. Run acmetool status – it will show you your existing domains.
  2. Run acmetool quickstart and choose the Let's Encrypt (Live v2) server. Continue with the configuration.
  3. When done, run acmetool status again – all your existing domains should use the V2 API from now on.
  4. As the last step, go to /var/lib/acme/accounts and remove the directory starting with acme-v01. Run acmetool status once more to validate that only the V2 account is now available.

I did this on a number of Linux and FreeBSD servers and everything went just fine.


FreeBSD – automatic services restart using fscd tool

Overview

FreeBSD comes with a rudimentary rc(8)-based init system built on shell scripts using functions from rc.subr. On one side it is stable, well documented and backward compatible. On the other, it lacks a lot of the features I would expect from a modern init. One of them is automatic service restart in case of a service crash or failure. There are a number of workarounds to do the job, but most of them are not really well integrated with the native system. Eventually I was able to find the fscd tool, which does the job very well and is designed to run with the FreeBSD init.

I am currently using this tool on both server and embedded FreeBSD deployments.

Reasons to use it (from the author's homepage):

  • kqueue() support provides push rather than pulling the applications, reducing system resources;
  • Integration with FreeBSD’s rc and service utilities do not require much overhead for configuration;
  • Other applications may be too bloated or too configuration heavy for some reasons.

Installing and configuring

The fscd tool can be installed from the FreeBSD packages using the pkg install fscd command. Source code and documentation can be found at github.com/bsdtrhodes/freebsd-fscd.

After installation do the following steps:

  1. Create the file /usr/local/etc/fscd.conf and list all the services you want to monitor in it (one per line). E.g. in my case it is:
    nagios
    syslog-ng
    syslogd
    php-fpm
    nginx
    quagga
    ntpd
    bsnmpd
    fcgiwrap
    exim
    syncthing
    openvpn
    smartd
    
  2. I would suggest editing /usr/local/etc/rc.d/fscd and adding the services you are controlling to the REQUIRE section. This is recommended to avoid fscd starting before the services it is going to control, which would otherwise result in an early start of the listed services. In my example the line looks like REQUIRE: nagios syslogd php-fpm nginx quagga ntpd bsnmpd fcgiwrap mail syncthing openvpn smartd. Test your changes using the rcorder /etc/rc.d/* /usr/local/etc/rc.d/* command.
  3. Enable fscd in the /etc/rc.conf by setting fscd_enable="YES" and start the service.

Testing and using fscd

Start fscd using the service fscd start command. Check that fscd is working using the fscadm status command:

# fscadm status
The fscd pid is 6327.
process name                             pid
--------------------------------------------------
nagios                                   4988
smartd                                   1458
openvpn                                  1645
syncthing                                1429
exim                                     1435
fcgiwrap                                 3359
bsnmpd                                   3364
ntpd                                     1380
quagga                                   467
nginx                                    1666
php-fpm                                  1635
syslogd                                  1215
syslog-ng                                1206

Kill any of the monitored services. If the command exited with an error code, fscd will wait ~1 minute for a graceful restart; if the code is different, it will restart the service immediately. All actions are logged to syslog. As a side effect, if you stop a service using service myservice stop, fscd will automatically restart it. To avoid that you can temporarily disable the service in fscd using the fscadm disable command. Later the service can be re-enabled using the fscadm enable command.

That's it 🙂 Of course I would like to see such functionality as part of the init system, but this workaround also works well for my use cases.


Getting T-mobile CZ Account Balance programmatically

API Last approach

For some reason most mobile (and utility) operators never provide an API to get the account balance. Usually your only choice is a JavaScript-only, slow and buggy website and, in some cases, a mobile app which requests permissions for everything possible and shows you ads as a bonus. At least for a SIM card in a phone you usually have the USSD option; however, for IoT devices it is not so easy. This small script gets the balance for a T-Mobile pre-paid card by crawling the web site. I decided to share it to save you time if you need to do the same. I am planning to convert it to a Nagios plugin to check the balance and alert if/when needed.

Ugly PHP code

Using the browser Developer Tools and a lot of time I was able to reproduce the logic which grabs the balance. The web site is JavaScript-only and at some point I was ready to give up, so complex and non-obvious is the procedure :-/ Of course it is expected to break the moment the site changes.

<?php

$creds = array(
  'username'=>'account@example.com',
  'password'=>'MyPassWord123'
);

$postvars = '';
foreach($creds as $key=>$value) {
    $postvars .= $key . "=" . $value . "&";
}

// login and fetch auth related cookies
$ch = curl_init('https://www.t-mobile.cz/.gang/login');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_POST, 1);
curl_setopt($ch, CURLOPT_POSTFIELDS, $postvars);
curl_setopt($ch, CURLOPT_HTTPHEADER, array("Cookie: gPcookie=1; "));
// Ask for the callback.
curl_setopt($ch, CURLOPT_HEADERFUNCTION, "curlResponseHeaderCallback");
$cookies = Array();
$result = curl_exec($ch);

switch ($http_code = curl_getinfo($ch, CURLINFO_HTTP_CODE)) {
  case 302:  // if user/password matches there is a redirect to another page
    break;
  default:
    die('FATAL: Unexpected HTTP code: '. $http_code. ", check username/password\n");
}

foreach($cookies as $v){
  $tmp=explode('=',$v[1],2);
  $cookie_arr[$tmp[0]]=$tmp[1];
}
curl_close($ch);

// validate that we got all required cookies
$fields_req=array("JSESSIONID","gftCookie","gTcookie","gAcookie","gScookie");
foreach ($fields_req as $key) {
  if(!isset($cookie_arr[$key])) {
    die("FATAL: Unable to get cookie: $key\n");
  }
}

// cookies expected on every request
$theader=array("Cookie: AJAXIBLE_JAVASCRIPT_ENABLED=true; gPcookie=1; JSESSIONID=".
  $cookie_arr["JSESSIONID"]."; gftCookie=".$cookie_arr["gftCookie"].
  "; gTcookie=".$cookie_arr["gTcookie"]."; gAcookie=".$cookie_arr["gAcookie"].
  "; gScookie=".$cookie_arr["gScookie"].";");

// getting lazy block id to fetch balance block
$ch = curl_init("https://www.t-mobile.cz/muj-t-mobile/-/module/myTariff?_rcorevccm_WAR_vcc_menu=mainmenu&_rcorevccm_WAR_vcc_menuCode=myTariff");
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_HTTPHEADER, $theader);
$result = curl_exec($ch);
if(!preg_match('/ data-lazy-loading="{"rk":"(\d+)"}" id="(\d+)"/', $result, $lazy_ids)){
  die("FATAL: Unable to get lazy block ids\n");
}
curl_close($ch);

// getting actual balance frame
$ch = curl_init('https://www.t-mobile.cz/muj-t-mobile/-/module/myTariff?p_p_id=rcorevccm_WAR_vcc&p_p_lifecycle=0&p_p_state=exclusive&p_p_mode=view&p_p_col_id=column-1&p_p_col_count=1&_rcorevccm_WAR_vcc_moduleCode=myTariff&_rcorevccm_WAR_vcc_lazyLoading=true&_rcorevccm_WAR_vcc_componentIds='.$lazy_ids[1].'.'.$lazy_ids[2]);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_POST, 1);
curl_setopt($ch, CURLOPT_HTTPHEADER, $theader);
$result = curl_exec($ch);
curl_close($ch);

// normalize
$result=preg_replace('/\s+/'," ",$result);

if(!preg_match('/Výše kreditu<\/div> <\/th> <td class="text-right"> <div class="text-xlarge"> <strong>([\d,]+)<\/strong>/', $result, $matches)){
  die("FATAL: Unable to get balance\n");
}
$balance=(float)str_replace(",",".",$matches[1]);
printf("Balance: %01.2f CZK\n", $balance);

// callback to collect cookies from the response headers
function curlResponseHeaderCallback($ch, $headerLine) {
    global $cookies;
    if (preg_match('/^Set-Cookie:\s*([^;]*)/mi', $headerLine, $cookie) == 1)
        $cookies[] = $cookie;
    return strlen($headerLine); // Needed by curl
}

?>

P.S. I was also able to grab a CSV file with the last statement from the web site; however, it seems that the data from the “current status” frame is more up to date.


Fixing USB detection with u-boot on the RPi

As part of my experiments with RPi/FreeBSD I decided to move FreeBSD completely to a USB stick. Some of the pros:

  • Easy to mount on a normal laptop if I need to fix something, back up, etc.
  • I have a ton of them 🙂
  • I think the performance of a recent USB 3 flash drive is better than a typical SD card. However, I need to verify that.
  • Possibility to use an SSD/HDD via a USB-SATA bridge

Cons:

  • The RPi 1 does not support real USB boot; you still need an SD card to initiate the boot process

How RPi 1 boots with FreeBSD:

The RPi expects a FAT partition on the SD card which contains the files needed to boot. This includes bootcode.bin (removed on the RPi 4), start*.elf (the primary firmware, different versions, reads the config.txt file), and u-boot (specified as kernel=u-boot.bin in config.txt), which loads ubldr (the arm port of loader(8)). ubldr scans for a UFS partition to load the kernel from and passes control to it. A diagram of the process is provided below:

boot.png

As ubldr does not have hardware-specific code, it relies on the u-boot API to access the disks. And here the problems start: 2 of my 3 USB drives were showing

starting USB...
USB0:   Core Release: 2.80a
scanning bus 0 for devices... usb_new_device: Cannot read configuration, skipping device XXXX:XXXX

on boot. I decided to debug why it happens.

Debugging u-boot

The first thing I decided to do was to build the latest u-boot for the RPi platform. For this I used the official u-boot git repository – git://git.denx.de/u-boot.git. I used a Docker container with Ubuntu 18.04 to avoid any build issues. You will have to install the cpp-arm-linux-gnueabi, gcc-arm-linux-gnueabi and binutils-arm-linux-gnueabi packages. Next, set the cross-compiler using the export CROSS_COMPILE=arm-linux-gnueabi- command.

In the u-boot folder we need to configure u-boot for our board and use case:

  1. Run make rpi_defconfig. This will select our board.
  2. Using make menuconfig enable the CONFIG_API option (or just add CONFIG_API=y to the .config file)
  3. Run the make command. This should create the u-boot.bin file which you can copy to the SD card. I recommend using a different name, e.g. uboot-new.bin; this allows reverting to the previous loader. Change the kernel line in config.txt to point to the new u-boot.

At this point I got two pieces of news: the good one was that my u-boot was working, the bad one that it had 100% the same USB issue. So I decided to debug it. You can enable USB debugging by adding #define DEBUG at the beginning of the common/usb.c file and re-compiling u-boot. This prints a lot of debug lines on initial boot, and we get a typical heisenbug at this stage – USB detection is fixed 🙂 Okay, at least we know that this is a timing issue and the debug output probably prevents it from happening.

After a number of experiments I was able to get it working without debug output. Here is the diff:

diff --git a/common/usb.c b/common/usb.c
index b70f614d24..121df520bc 100644
--- a/common/usb.c
+++ b/common/usb.c
@@ -1086,10 +1086,11 @@ int usb_select_config(struct usb_device *dev)
         * requests in the first microframe, the stick crashes. Wait about
         * one microframe duration here (1mS for USB 1.x , 125uS for USB 2.0).
         */
-       mdelay(1);
+       mdelay(300);

        /* only support for one config for now */
        err = usb_get_configuration_len(dev, 0);
+       mdelay(100);
        if (err >= 0) {
                tmpbuf = (unsigned char *)malloc_cache_aligned(err);
                if (!tmpbuf)
@@ -1107,6 +1108,7 @@ int usb_select_config(struct usb_device *dev)
        usb_parse_config(dev, tmpbuf, 0);
        free(tmpbuf);
        usb_set_maxpacket(dev);
+       mdelay(100);
        /*
         * we set the default configuration here
         * This seems premature. If the driver wants a different configuration

After that it detects the USB device, and now it is visible (and bootable!) in ubldr.

Bus usb@7e980000: scanning bus usb@7e980000 for devices... 4 USB Device(s) found
       scanning usb for storage devices... 1 Storage Device(s) found

ubldr will scan all devices and try to boot from the first bootable UFS partition found. This way I was able to move all the files to the USB flash drive, with only a few files left on the SD card.


BusyBox on the FreeBSD

As an experiment I decided to play with BusyBox on FreeBSD. BusyBox combines tiny versions of many common UNIX utilities into a single small executable.

Some of my goals

  1. Create a minimal environment suitable for embedded use. BusyBox on Linux provides a fairly complete environment for any small or embedded system, so I was thinking of trying the same on FreeBSD
  2. Attempt to reduce Raspberry/BSD boot time. My profiling shows that the userland boot may take the same amount of time as the kernel, sometimes more. I think the reason could be the BSD rc init, some of the /sbin/init logic, etc. It is not really easy to profile, as a lot of these tools do not print timestamps. To do some “poor man’s profiling” I patched the cu tool to show a timestamp on every line printed.
  3. Create a tiny environment for a FreeBSD jail. BusyBox can be compiled statically and is commonly used in Docker as a minimal base. Some projects also use BusyBox to create custom applets in the embedded world (e.g. RIPE Atlas or Ubiquiti devices)
  4. Self-education, to better understand how FreeBSD init and friends work, and to compare this with the Linux one

Initial state

I was surprised to find that BusyBox already exists in the FreeBSD ports. However, the version in the port is outdated and does not contain many must-have applets. Moreover, I found that it crashes on the armv6 arch (tested on a Raspberry Pi 1). Still, it was a good start! I also found that there is some very basic initial support in the BusyBox source code, so hopefully the author will accept non-Linux patches.

So I decided to update the port to the latest version and to fix some of the issues found. The arm crash is actually a clang/arm problem which I was able to work around. I was also able to fix a few applets and get more tools working.

Current state and future improvements:

I submitted a PR to update the port to the latest version and to include the fixes I have done. Currently the following applets are compiled in:

addgroup, ar, arch, ash, awk, base64, basename, bc, bunzip2, bzcat, bzip2, cal, cat, chgrp, chmod, chown, chroot, cksum, clear, cmp, comm, cp, cpio, crontab, cttyhack, cut, dc, dd, delgroup, diff, dirname, dnsd, dos2unix, dpkg, dpkg-deb, du, echo, ed, env, expand, expr, factor, fakeidentd, fallocate, false, fatattr, find, flock, fold, fsync, ftpd, ftpget, ftpput, getopt, grep, groups, gunzip, gzip, hd, head, hexdump, hexedit, hostid, hostname, httpd, id, inetd, install, iostat, ipcalc, kill, killall, killall5, less, link, ln, logger, logname, logread, lpq, lpr, ls, lzcat, lzma, lzop, man, md5sum, microcom, mkdir, mkfifo, mknod, mktemp, more, mpstat, mv, nc, nice, nl, nmeter, nohup, nologin, nuke, od, paste, patch, pgrep, pidof, pipe_progress, pkill, pmap, poweroff, printenv, printf, ps, pscan, pwd, pwdx, readlink, readprofile, realpath, reboot, renice, reset, resize, resume, rev, rm, rmdir, rpm, rpm2cpio, run-parts, scriptreplay, sed, seq, setsid, sh, sha1sum, sha256sum, sha3sum, sha512sum, shred, shuf, sleep, smemcap, sort, split, ssl_client, stat, strings, stty, su, sulogin, sum, svok, sync, syslogd, tail, tar, tee, telnet, telnetd, test, tftp, tftpd, timeout, top, touch, tr, traceroute, traceroute6, true, truncate, tty, ttysize, uname, uncompress, unexpand, uniq, unix2dos, unlink, unlzma, unxz, unzip, usleep, uudecode, uuencode, vi, volname, watch, wc, wget, which, whoami, xargs, xxd, xz, xzcat, yes, zcat

I was able to test/fix most of them and they seem to work fine. One of the problems I found is that all tools working with processes depend on the Linux procfs. For now I am just changing the path from /proc to /compat/linux/proc/ and added a note about the linprocfs requirement to the port; for real use this needs to be patched to use the native BSD KVM API. Also, most of the network-interface-related applets will need to be ported to make the network configurable. Still, the basic functionality works and I was able to create a busybox-only jail with working networking tools, shell, etc. A statically linked busybox takes about 4MB. Maybe it would be easier to take some of the missing tools from /rescue instead of the busybox port, we will see. Another step to complete is to get init working. After that point it should be possible to run a FreeBSD kernel with a busybox-only userland. I will try to send all my BSD-specific patches upstream; we will see whether they get accepted or not.

Testing in jail:

  1. Install busybox with the STATIC option set
  2. Create a directory for the jail. I am using /root/test for this example. Create some sub-directories: mkdir -p /root/test/dev /root/test/bin. Copy busybox into the jail: cp /usr/local/bin/busybox /root/test/bin/
  3. Create /etc/jail.conf:
testjail {
   path = /root/test;
   mount.devfs;
   host.hostname = testhostname;
   ip4.addr = 192.168.101.113;
   interface = ue0;
   exec.start = "/bin/busybox";
}
  4. Finally run the jail using jail -c testjail. You should see an ash prompt! You can create all the /bin links using busybox itself:
cd /bin
./busybox --list|./busybox xargs -n1 ./busybox ln -s busybox

To use the network tools you will also need to create an /etc/resolv.conf file with a DNS server to use, e.g.

# echo nameserver 8.8.8.8 > /etc/resolv.conf
# wget google.com
Connecting to google.com (172.217.23.206:80)
Connecting to www.google.com (172.217.23.228:80)
index.html           100% |*************************************************| 11897  0:00:00 ETA
#

Comments and suggestions are welcome.


Finding “orphaned” packages on the FreeBSD

On some of my old FreeBSD systems I found packages that are installed locally but have already been removed from the ports tree. As a result, such packages cannot be upgraded automatically and may cause security risks and problems during OS upgrades later.

I did not find any pkg command which allows finding such orphans, but it is very easy to do with a simple one-liner:

pkg info --origin -a | \
awk '{print "ls /usr/ports/"$2 " > /dev/null 2>/dev/null || echo Origin not found: "$1}'| \
sh

The ports collection needs to be installed and updated before running it. Sample output is provided below:

$ pkg info --origin -a|awk '{print "ls /usr/ports/"$2 " > /dev/null 2>/dev/null || echo Origin not found: "$1}'|sh
Origin not found: GeoIP-1.6.11
Origin not found: bind99-9.9.11P1_1
Origin not found: cdiff-1.0.3,1
Origin not found: libcheck-0.10.0
Origin not found: p5-Geo-IP-1.51
Origin not found: pecl-intl-3.0.0_11
Origin not found: php56-5.6.32_1
Origin not found: php56-bz2-5.6.32_1
Origin not found: php56-ctype-5.6.32_1
Origin not found: php56-curl-5.6.32_1
Origin not found: php56-dom-5.6.32_1
Origin not found: php56-exif-5.6.32_1
Origin not found: php56-fileinfo-5.6.32_1
Origin not found: php56-filter-5.6.32_1
Origin not found: php56-gd-5.6.32_1
Origin not found: php56-hash-5.6.32_1
Origin not found: php56-iconv-5.6.32_1
Origin not found: php56-json-5.6.32_1
Origin not found: php56-ldap-5.6.32_1
Origin not found: php56-mbstring-5.6.32_1
Origin not found: php56-mcrypt-5.6.32_1
Origin not found: php56-mysql-5.6.32_1
Origin not found: php56-mysqli-5.6.32_1
Origin not found: php56-openssl-5.6.32_1
Origin not found: php56-pdo-5.6.32_1
Origin not found: php56-pdo_mysql-5.6.32_1
Origin not found: php56-posix-5.6.32_1
Origin not found: php56-session-5.6.32_1
Origin not found: php56-simplexml-5.6.32_1
Origin not found: php56-wddx-5.6.32_1
Origin not found: php56-xml-5.6.32_1
Origin not found: php56-xmlreader-5.6.32_1
Origin not found: php56-xmlwriter-5.6.32_1
Origin not found: php56-xsl-5.6.32_1
Origin not found: php56-zip-5.6.32_1
Origin not found: php56-zlib-5.6.32_1
Origin not found: swig13-1.3.40_1

In the example above it is clear that php56 needs to be replaced with a more recent version, as well as a few other packages.
