How to transfer pictures wirelessly from a Sony camera on macOS without PlayMemories

Sony cameras can transfer photos to a PC over Wi-Fi. To support this on the host, Sony provided software called “PlayMemories”. This software never worked very well and was eventually abandoned by Sony completely. After upgrading to macOS Catalina I found that it no longer works at all, with no vendor updates available. So I started to look for alternatives.

How it works

Internally, Sony uses the PTP/IP protocol to transfer files. When the user chooses the ‘Send to Computer’ option from the camera menu, the device sends a UDP packet to the 239.255.255.250:1900 multicast address (the standard SSDP discovery address). The software (PlayMemories) captures this packet, connects to the camera, and starts the sync process.
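The discovery step can be sketched in a few lines of Python. This is my own illustration of the mechanism, not code from either tool; the sample assumes a generic SSDP announcement, the exact headers a given camera sends may differ:

```python
import socket
import struct

SSDP_ADDR = "239.255.255.250"
SSDP_PORT = 1900

def parse_ssdp(datagram):
    """Split an SSDP datagram into its start line and a header dict."""
    lines = datagram.decode(errors="replace").split("\r\n")
    headers = {}
    for line in lines[1:]:
        key, sep, value = line.partition(":")
        if sep:
            headers[key.strip().upper()] = value.strip()
    return lines[0], headers

def wait_for_announcement(timeout=60):
    """Block until an SSDP packet arrives on the LAN; return (sender, headers)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", SSDP_PORT))
    # join the multicast group the camera announces itself on
    mreq = struct.pack("4sl", socket.inet_aton(SSDP_ADDR), socket.INADDR_ANY)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    sock.settimeout(timeout)
    data, addr = sock.recvfrom(4096)
    sock.close()
    return addr, parse_ssdp(data)[1]
```

Running `wait_for_announcement()` while choosing ‘Send to Computer’ on the camera should print the sender address that the sync software then connects to.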

After the initial network configuration, the camera needs to be connected to the computer over USB once to set the PTP/IP GUID. This is a one-time operation, which is also handled by PlayMemories.

Open-source implementations

I was able to find two working open-source implementations:

  • falk0069/sony-pm-alt – uses gphoto2 via a Python wrapper and provides a C program to set the initial GUID on the camera.
  • shezi/airmtp – pure Python 2 MTP implementation with some additional options. Does not provide any way to initialize the camera over USB.

I found that both tools work fine.

sony-pm-alt

The tool comes with an initial GUID setter, which uses libusb-1.0 to configure the GUID on the camera. I was able to compile it on macOS:

clang `pkg-config libusb-1.0 --libs --cflags` sony-guid-setter.c -o sony-guid-setter

However, it turned out to be useless there, as macOS claims the USB device exclusively when the camera is connected. To work around that, I compiled the tool in a VirtualBox Linux VM and passed the camera through using the USB pass-through functionality. This worked well and I got the camera configured. In my case the exact command was sudo ./sony-guid-setter.o 054c:08af -g. This operation needs to be done only once per camera network setup.

To use wireless transfer I installed gphoto2 from Homebrew (brew install gphoto2) and changed PHOTO_DIR in sony-pm-alt.py to my user folder. The application works fine and is able to transfer photos from the camera.

airmtp

AirMTP is a pure Python 2 application, so it has no external dependencies. It also supports many additional sync options, e.g. the ability to skip old files, to limit downloads to specific extensions, etc.

To use it with the camera, run it with a command line like airmtp.py --ipaddress auto --outputdir and start the transfer on the camera.

The performance of the two tools is roughly the same, so I decided to use airmtp, since it has no external dependencies and supports more options. To run it on startup I created the following plist and placed it at ~/Library/LaunchAgents/airmtp.plist:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
  <dict>
    <key>KeepAlive</key>
    <true/>
    <key>Label</key>
    <string>airmtp</string>
    <key>ProgramArguments</key>
    <array>
      <string>/usr/local/airmtp/airmtp.py</string>
      <string>--extlist</string>
      <string>JPG</string>
      <string>--ipaddress</string>
      <string>auto</string>
      <string>--outputdir</string>
      <string>/Users/user/pics</string>
      <string>--ifexists</string>
      <string>skip</string>
    </array>
    <key>RunAtLoad</key>
    <true/>
  </dict>
</plist>

To enable the service, run launchctl load -w ~/Library/LaunchAgents/airmtp.plist. Additionally, I added Python code which shows a macOS notification on camera connect and file transfer. I will publish my changes later.
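For reference, here is a minimal sketch of how such a notification can be posted from Python via osascript. This is my own illustration, not the code I will publish; the helper names are made up, and notify() only works on macOS:

```python
import subprocess

def build_script(title, message):
    """Build the AppleScript snippet; quotes are escaped so file names can't break it."""
    esc = lambda s: s.replace("\\", "\\\\").replace('"', '\\"')
    return 'display notification "%s" with title "%s"' % (esc(message), esc(title))

def notify(title, message):
    """Post a macOS notification via the osascript CLI (macOS only)."""
    subprocess.run(["osascript", "-e", build_script(title, message)], check=True)

# e.g. notify("airmtp", "DSC01234.JPG transferred")
```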

Summary

Thanks to OSS software, I can use the camera sync functionality again – and this time without vendor lock-in.


AWS Client VPN internals

About AWS Client VPN

AWS Client VPN is a managed client-based VPN service provided by AWS. With Client VPN, it is possible to access AWS resources from any location using an OpenVPN-based VPN client.

Recently AWS added the ability to use a SAML IdP for user authentication and authorization (see image).

aws-saml

However, SAML federation requires the proprietary AWS VPN Client, which is available only for Windows and macOS. Moreover, the client is closed source (and likely violates the GPL) and very limited.

In this post, I will show how it works under the hood and how to connect to it using the native OpenVPN binary.

What is wrong with “native AWS client”

  • Available only for macOS and Windows. OpenVPN itself supports pretty much every platform that can establish VPN connections.
  • Closed source. In combination with root access, this adds security risks.
  • VERY limited. No hooks, no timeouts, no log export, nothing at all. Just “import profile” and “Connect”. If things do not work, you have to search for the logs in the /tmp folder (??).
  • The client whitelists only very few OpenVPN options. If you try to add any non-whitelisted option to the config, the client fails to start. This includes inactivity timeout settings, scripting, etc.
  • No documentation about creating customized packages with a pre-loaded config

Hopefully, AWS will address some of these limitations in the future. I would also suggest making it open source, as the current client very likely violates the GPL.

How “native client” works

I used Wireshark and LLDB to find out how the client really works. The user flow looks like this:

  1. On the first run, you import a profile into the AWS VPN Client. The client detects the auth-federate keyword in it and saves the config in ~/.config/AWSVPNClient/OpenVpnConfigs. The special auth-federate keyword is removed at this stage.
  2. The user runs the AWS VPN Client and uses the “Connect” menu to connect.
  3. The AWS VPN Client opens a web browser and redirects it to the SAML IdP page. After authorization, the browser shows the message “Authentication details received, processing details. You may close this window at any time.”
  4. The client connects to the gateway and traffic starts to flow via the VPN.

Now, let’s take a look at what is going on internally.

  1. A Mono-based wrapper starts the OpenVPN binary (part of the package) and starts an HTTP server at the http://127.0.0.1:35001/ address.
  2. Using the OpenVPN management interface, it asks OpenVPN to connect to the provided gateway with the username N/A and the password ACS::35001. This (of course) fails with an authentication failure, but as the failure reason the VPN server sends a SAML redirect URL.
  3. The wrapper takes this URL and opens it in the browser. If the SAML flow succeeds, the IdP redirects the browser with POST data to http://127.0.0.1:35001/. The HTTP POST data contains a SAMLResponse field, which the Mono wrapper captures on this URL.
  4. The Mono wrapper asks OpenVPN to establish the connection a second time, now with N/A as the username and the SAMLResponse plus some session data as the password.
  5. The AWS VPN server validates them, and if they look valid (e.g. signed by the corresponding IdP, etc.), starts the session.

How to connect with OSS OpenVPN to the AWS Client VPN using SAML

I decided to emulate this flow. I started by writing a small HTTP server in Go, which listens on 127.0.0.1:35001 and saves the SAMLResponse POST form field to a file. The next step was to write a shell wrapper which emulates the activity of the Mono wrapper in the AWS client. I decided not to use the management interface, but to run the OpenVPN binary directly.
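For illustration, the same capture server can be sketched in Python instead of Go. This is a stand-in for my actual wrapper, not the published code; the saml-response.txt output file name is my assumption:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs

def extract_saml_response(body):
    """Pull the SAMLResponse field out of a urlencoded POST body, or return None."""
    values = parse_qs(body).get("SAMLResponse")
    return values[0] if values else None

class SamlHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        saml = extract_saml_response(self.rfile.read(length).decode())
        if saml:
            # hand the assertion over to the wrapper script via a file
            with open("saml-response.txt", "w") as f:
                f.write(saml)
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"Got SAMLResponse, you may close this window.")

# To run: HTTPServer(("127.0.0.1", 35001), SamlHandler).serve_forever()
```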

Surprisingly, I was able to get the connection up, but only with the acvc-openvpn binary from the AWS VPN Client.app package. So I decided to build OpenVPN myself to debug why it does not work with an OSS binary. After some experiments the reasons were found:

  • The password length in OSS OpenVPN is limited to 128 bytes, while the SAML response is ~11 KB. I extended this size, but then ran into a TLS error.
  • It turned out that the password block does not fit into the TLS_CHANNEL_BUF_SIZE limit in OpenVPN, so I had to extend it as well.

My patch is available here. At this point I was able to connect and use the VPN. It also clearly shows that the OpenVPN source code was modified, so AWS has to publish it, according to the GPL requirements.

TODO

So far my PoC can connect to the VPN server. After connecting, it works the same way as the AWS client. I have already tested both TCP and UDP setups, with ports 443 and 1194. Some things to do (if I find the time):

  • Make the golang wrapper smarter and replace the shell wrapper entirely
  • Think about how to integrate this with Tunnelblick or another OSS UI for OpenVPN

As usual, patches and contributions are welcome; the repository URL is github.com/samm-git/aws-vpn-client.


Official macOS NVMe SMART header found

Apple has finally published the NVMe SMART header (NVMeSMARTLibExternal.h). It was found in the latest Xcode update by Harry Mallon, who also provided an initial version of the smartmontools patch to use it. I adopted and fixed this patch, so the latest smartmontools version got Log Pages support on macOS. The original Apple header can be found in my gist.

The good news is that my effort to reconstruct the API was mostly correct. I also found some functions (GetFieldCounters, GetSystemCounters, GetAlgorithmCounters) that are not exported officially. The mystery of the non-working GetLogPage function is also resolved – the second parameter is the size (as I expected), but in DWORDs and starting from 0 (e.g. 1 == 2 DWORDs == 64 bits), and there is strict validation of it.
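To make the GetLogPage size semantics concrete, here is a small conversion helper – my own illustration, not part of the Apple header:

```python
def log_page_size_param(nbytes):
    """Convert a desired log page size in bytes into GetLogPage's size parameter.

    Observed semantics: the parameter counts DWORDs and starts from 0,
    so a value of 1 means 2 DWORDs (64 bits).
    """
    if nbytes == 0 or nbytes % 4:
        raise ValueError("size must be a positive multiple of 4 bytes (one DWORD)")
    return nbytes // 4 - 1
```

For example, a 512-byte SMART log page would be requested with a size parameter of 127.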


Migration to Let’s Encrypt V2 API with acmetool

Why acmetool?

A long time ago I migrated from certbot to acmetool due to its simplicity and much better design. It is still working perfectly, managing many certificates without any headache. The only problem was the new (V2) ACME API, which becomes mandatory starting from July 2020. Development of acmetool is not very active, but at some point the author provided a new (beta) release with V2 protocol support. The migration process is not documented, so I decided to write this blog post.

How to migrate

I would recommend starting with a backup of the ACME_STATE_DIR directory. It is located at /var/lib/acme on Linux and /var/db/acme on FreeBSD. During the migration the content of this directory will be changed.

The next thing is to install the new binary. I have already updated the acmetool FreeBSD port, and it is also updated in Debian sid. If your OS does not have the update yet, the binary can easily be built using a recent Go compiler. When the binary upgrade is done:

  1. Run acmetool status – it will show your existing domains.
  2. Run acmetool quickstart and choose the Let's Encrypt (Live v2) server. Continue with the configuration.
  3. When done, run acmetool status again – all your existing domains should use the V2 API from now on.
  4. The last step is to go to /var/lib/acme/accounts and remove the directory starting with acme-v01. Run acmetool status once more to validate that only the V2 account is available.

I did this on a number of Linux and FreeBSD servers and everything went just fine.


FreeBSD – automatic service restart using the fscd tool

Overview

FreeBSD comes with a rudimentary rc(8)-based init system built on shell scripts using functions from rc.subr. On one hand, it is stable, well documented and backward compatible. On the other, it lacks a lot of the features I would expect from a modern init. One of them is automatic service restart in case of a service crash or failure. There are a number of workarounds to do the job, but most of them are not well integrated with the native system. Eventually I was able to find the fscd tool, which does the job very well and is designed to run with the FreeBSD init.

I am currently using this tool on both server and embedded FreeBSD deployments.

Reasons to use it (from the author's homepage):

  • kqueue() support means the applications push notifications rather than being polled, reducing system resource usage;
  • Integration with FreeBSD’s rc and service utilities does not require much configuration overhead;
  • Other applications may be too bloated or too configuration-heavy for some use cases.

Installing and configuring

The fscd tool can be installed from the FreeBSD packages using the pkg install fscd command. Source code and documentation can be found at github.com/bsdtrhodes/freebsd-fscd.

After installation do the following steps:

  1. Create the file /usr/local/etc/fscd.conf and list all the services you want to monitor in it (one per line). E.g. in my case it is
    nagios
    syslog-ng
    syslogd
    php-fpm
    nginx
    quagga
    ntpd
    bsnmpd
    fcgiwrap
    exim
    syncthing
    openvpn
    smartd
    
  2. I would suggest editing /usr/local/etc/rc.d/fscd and adding the services you are controlling to its REQUIRE section. This is recommended so that fscd does not start before the services it is going to control, which would result in an early start of the listed services. In my example the line looks like REQUIRE: nagios syslogd php-fpm nginx quagga ntpd bsnmpd fcgiwrap mail syncthing openvpn smartd. Test your changes using the rcorder /etc/rc.d/* /usr/local/etc/rc.d/* command.
  3. Enable fscd in /etc/rc.conf by setting fscd_enable="YES" and start the service.

Testing and using fscd

Start fscd using the service fscd start command. Check that fscd is working using the fscadm status command:

# fscadm status
The fscd pid is 6327.
process name                             pid
--------------------------------------------------
nagios                                   4988
smartd                                   1458
openvpn                                  1645
syncthing                                1429
exim                                     1435
fcgiwrap                                 3359
bsnmpd                                   3364
ntpd                                     1380
quagga                                   467
nginx                                    1666
php-fpm                                  1635
syslogd                                  1215
syslog-ng                                1206

Kill any of the monitored services. If the command exited with an error code, fscd will wait ~1 minute for a graceful restart; if the exit code is different, it will restart it immediately. All actions are logged to syslog. As a side effect, if you stop a service using service myservice stop, fscd will automatically restart it. To avoid that, you can temporarily disable the service in fscd using the fscadm disable command; later the service can be re-enabled using fscadm enable.
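The basic idea can be illustrated with a tiny polling supervisor in Python. This is only a sketch of the concept – fscd itself is notified about process exits via kqueue (push) rather than polling, and the helper names here are made up:

```python
import os
import subprocess
import time

def is_alive(pid):
    """Check process existence with the classic kill(pid, 0) probe."""
    try:
        os.kill(pid, 0)
    except ProcessLookupError:
        return False
    except PermissionError:
        return True  # exists, but owned by another user
    return True

def supervise(argv, backoff=1.0):
    """Restart argv whenever it exits - a polling stand-in for fscd's kqueue model."""
    while True:
        proc = subprocess.Popen(argv)
        proc.wait()
        time.sleep(backoff)  # small delay before restarting the service
```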

That’s it 🙂 Of course I would like to see such functionality as part of the init system, but this workaround also works well for my use cases.

Getting T-mobile CZ Account Balance programmatically

The API-last approach

For some reason, most mobile (and utility) operators never provide an API to get the account balance. Usually your choice is a JavaScript-only, slow and buggy website, and in some cases also a mobile app, which will request every possible permission and show you ads as a bonus. For the SIM card in your phone you usually have the USSD option; however, for IoT devices it is not so easy. This small script gets the balance for a T-Mobile pre-paid card by crawling the web site. I decided to share it to save you time if you ever need to do the same. I am planning to convert it into a Nagios plugin to check the balance and alert if/when needed.

Ugly PHP code

Using the browser's developer tools and a lot of time, I was able to reproduce the logic which grabs the balance. The web site is JavaScript-only and at some point I was ready to give up – that is how complex and non-obvious the procedure is :-/ Of course it is expected to break the moment the site is changed.

<?php

$creds = array(
  'username'=>'account@example.com',
  'password'=>'MyPassWord123'
);

$postvars = '';
foreach($creds as $key=>$value) {
    $postvars .= $key . "=" . $value . "&";
}

// login and fetch auth related cookies
$ch = curl_init('https://www.t-mobile.cz/.gang/login');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_POST, 1);
curl_setopt($ch, CURLOPT_POSTFIELDS, $postvars);
curl_setopt($ch, CURLOPT_HTTPHEADER, array("Cookie: gPcookie=1; "));
// Ask for the callback.
curl_setopt($ch, CURLOPT_HEADERFUNCTION, "curlResponseHeaderCallback");
$cookies = Array();
$result = curl_exec($ch);

switch ($http_code = curl_getinfo($ch, CURLINFO_HTTP_CODE)) {
  case 302:  // if user/password matches there is redirects to another page
    break;
  default:
    die('FATAL: Unexpected HTTP code: '. $http_code. ", check username/password\n");
}

foreach($cookies as $v){
  $tmp=explode('=',$v[1],2);
  $cookie_arr[$tmp[0]]=$tmp[1];
}
curl_close($ch);

// validate that we got all required cookies
$fields_req=array("JSESSIONID","gftCookie","gTcookie","gAcookie","gScookie");
foreach ($fields_req as $key) {
  if(!isset($cookie_arr[$key])) {
    die("FATAL: Unable to get cookie: $key\n");
  }
}

// cookies expected on every request
$theader=array("Cookie: AJAXIBLE_JAVASCRIPT_ENABLED=true; gPcookie=1; JSESSIONID=".
  $cookie_arr["JSESSIONID"]."; gftCookie=".$cookie_arr["gftCookie"].
  "; gTcookie=".$cookie_arr["gTcookie"]."; gAcookie=".$cookie_arr["gAcookie"].
  "; gScookie=".$cookie_arr["gScookie"].";");

// getting lazy block id to fetch balance block
$ch = curl_init("https://www.t-mobile.cz/muj-t-mobile/-/module/myTariff?_rcorevccm_WAR_vcc_menu=mainmenu&_rcorevccm_WAR_vcc_menuCode=myTariff");
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_HTTPHEADER, $theader);
$result = curl_exec($ch);
if(!preg_match('/ data-lazy-loading="{"rk":"(\d+)"}" id="(\d+)"/', $result, $lazy_ids)){
  die("FATAL: Unable to get lazy block ids\n");
}
curl_close($ch);

// getting actual balance frame
$ch = curl_init('https://www.t-mobile.cz/muj-t-mobile/-/module/myTariff?p_p_id=rcorevccm_WAR_vcc&p_p_lifecycle=0&p_p_state=exclusive&p_p_mode=view&p_p_col_id=column-1&p_p_col_count=1&_rcorevccm_WAR_vcc_moduleCode=myTariff&_rcorevccm_WAR_vcc_lazyLoading=true&_rcorevccm_WAR_vcc_componentIds='.$lazy_ids[1].'.'.$lazy_ids[2]);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_POST, 1);
curl_setopt($ch, CURLOPT_HTTPHEADER, $theader);
$result = curl_exec($ch);
curl_close($ch);

// normalize
$result=preg_replace('/\s+/'," ",$result);

if(!preg_match('/Výše kreditu<\/div> <\/th> <td class="text-right"> <div class="text-xlarge"> <strong>([\d,]+)<\/strong>/', $result, $matches)){
  die("FATAL: Unable to get balance\n");
}
$balance=(float)str_replace(",",".",$matches[1]);
printf("Balance: %01.2f CZK\n", $balance);

// callback to collect Set-Cookie headers
function curlResponseHeaderCallback($ch, $headerLine) {
    global $cookies;
    if (preg_match('/^Set-Cookie:\s*([^;]*)/mi', $headerLine, $cookie) == 1)
        $cookies[] = $cookie;
    return strlen($headerLine); // Needed by curl
}

?>

P.S. I was also able to grab a CSV file with the last statement from the web site; however, it seems that the data from the “current status” frame is more up to date.


Fixing USB detection with u-boot on the RPi

As part of my experiments with RPi/FreeBSD I decided to move FreeBSD completely to a USB stick. Some of the pros:

  • Easy to mount on a normal laptop if I need to fix something, back up, etc.
  • I have a ton of them 🙂
  • I think the performance of recent USB 3 flash drives is better than that of a typical SD card. However, I need to verify that.
  • Possibility to use an SSD/HDD via a USB-SATA bridge

Cons:

  • The RPi 1 does not support real USB boot; you still need an SD card to initiate the boot process

How the RPi 1 boots FreeBSD:

The RPi expects a FAT partition on the SD card which contains the files needed to boot. This includes bootcode.bin (removed on the RPi 4), start*.elf (the primary firmware, different versions, reads the config.txt file) and u-boot (specified as kernel=u-boot.bin in config.txt), which loads ubldr (the ARM port of loader(8)). ubldr scans for a UFS partition to load the kernel from and passes control to it. A diagram of the process is provided below:

boot.png

As ubldr does not contain hardware-specific code, it relies on the u-boot API to access the disks. And here the problems start – 2 of my 3 USB drives were showing

starting USB...
USB0:   Core Release: 2.80a
scanning bus 0 for devices... usb_new_device: Cannot read configuration, skipping device XXXX:XXXX

on boot. I decided to debug why it happens.

Debugging u-boot

The first thing I decided to do was build the latest u-boot for the RPi platform. To do this I used the official u-boot git repository – git://git.denx.de/u-boot.git. I used a Docker container with Ubuntu 18.04 to avoid any build issues. You will have to install the cpp-arm-linux-gnueabi, gcc-arm-linux-gnueabi and binutils-arm-linux-gnueabi packages. Next, set the cross-compiler using the export CROSS_COMPILE=arm-linux-gnueabi- command.

In the u-boot folder we need to configure u-boot for our board and use case:

  1. Run make rpi_defconfig. This selects our board.
  2. Using make menuconfig, enable the CONFIG_API option (or just add CONFIG_API=y to the .config file).
  3. Run make. This should create a u-boot.bin file which you can copy to the SD card. I recommend using a different name, e.g. uboot-new.bin, which allows reverting to the previous loader. Change the kernel line in config.txt to point to the new u-boot.

At this point I got two pieces of news: the good one, that my u-boot works; and the bad one, that it has exactly the same USB issue. So I decided to debug it. You can enable USB debugging by adding #define DEBUG at the beginning of the common/usb.c file and recompiling u-boot. This prints a lot of debug lines on initial boot, and at this stage we get a typical heisenbug – USB detection is fixed 🙂 Okay, at least we know that this is a timing issue and the debug output probably prevents it from happening.

After a number of experiments I was able to get it working without debug. Here is the diff:

diff --git a/common/usb.c b/common/usb.c
index b70f614d24..121df520bc 100644
--- a/common/usb.c
+++ b/common/usb.c
@@ -1086,10 +1086,11 @@ int usb_select_config(struct usb_device *dev)
         * requests in the first microframe, the stick crashes. Wait about
         * one microframe duration here (1mS for USB 1.x , 125uS for USB 2.0).
         */
-       mdelay(1);
+       mdelay(300);

        /* only support for one config for now */
        err = usb_get_configuration_len(dev, 0);
+       mdelay(100);
        if (err >= 0) {
                tmpbuf = (unsigned char *)malloc_cache_aligned(err);
                if (!tmpbuf)
@@ -1107,6 +1108,7 @@ int usb_select_config(struct usb_device *dev)
        usb_parse_config(dev, tmpbuf, 0);
        free(tmpbuf);
        usb_set_maxpacket(dev);
+       mdelay(100);
        /*
         * we set the default configuration here
         * This seems premature. If the driver wants a different configuration

With this patch it detects the USB device, which is now visible (and bootable!) in ubldr.

Bus usb@7e980000: scanning bus usb@7e980000 for devices... 4 USB Device(s) found
       scanning usb for storage devices... 1 Storage Device(s) found

ubldr scans all devices and tries to boot from the first bootable UFS partition found. This way I was able to move all the files to the USB flash drive, with only a few files left on the SD card.


BusyBox on FreeBSD

As an experiment I decided to play with BusyBox on FreeBSD. BusyBox combines tiny versions of many common UNIX utilities into a single small executable.

Some of my goals

  1. Create a minimal environment suitable for embedded use. BusyBox on Linux provides a fairly complete environment for any small or embedded system, so I wanted to try the same on FreeBSD.
  2. Attempt to reduce the Raspberry/BSD boot time. My profiling shows that the userland boot may take the same amount of time as the kernel, sometimes more. I think the reasons could be the BSD rc init, some of the /sbin/init logic, etc. This is not easy to profile, as many of these tools do not print timestamps; as a “poor man's profiler” I patched the cu tool to show a timestamp on every line printed.
  3. Create a tiny environment for a FreeBSD jail. BusyBox can be compiled statically and is commonly used in Docker as a minimal base. Some projects also use BusyBox to create custom applets in the embedded world (e.g. RIPE Atlas or Ubiquiti devices).
  4. Self-education, to better understand how FreeBSD init and friends work, and to compare this with the Linux equivalents.
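The “poor man's profiler” idea from point 2 can be sketched in Python as well (my own illustration; the author patched cu(1) in C instead):

```python
import time

def stamp_lines(lines, now=time.monotonic):
    """Prefix each line with the seconds elapsed since the stream started."""
    start = now()
    for line in lines:
        yield "[%8.3f] %s" % (now() - start, line)

# Typical use: pipe the serial console through it, e.g. a small script
# that writes stamp_lines(sys.stdin) to sys.stdout, so every boot message
# gets a relative timestamp.
```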

Initial state

I was surprised to find that BusyBox already exists in the FreeBSD ports. However, the version in the port is outdated and does not contain many must-have applets. Moreover, I found that it crashes on the armv6 arch (tested on a Raspberry Pi 1). Still, it was a good start! I also found that there is some very basic initial support in the BusyBox source code, so hopefully the author will accept non-Linux patches.

So I decided to update the port to the latest version and fix some of the issues found. The arm crash is actually a clang/arm problem which I was able to work around. I was also able to fix a few applets and get more tools working.

Current state and future improvements:

I submitted a PR to update the port to the latest version and include the fixes I have done. Currently the following applets are compiled in:

addgroup, ar, arch, ash, awk, base64, basename, bc, bunzip2, bzcat, bzip2, cal, cat, chgrp, chmod, chown, chroot, cksum, clear, cmp, comm, cp, cpio, crontab, cttyhack, cut, dc, dd, delgroup, diff, dirname, dnsd, dos2unix, dpkg, dpkg-deb, du, echo, ed, env, expand, expr, factor, fakeidentd, fallocate, false, fatattr, find, flock, fold, fsync, ftpd, ftpget, ftpput, getopt, grep, groups, gunzip, gzip, hd, head, hexdump, hexedit, hostid, hostname, httpd, id, inetd, install, iostat, ipcalc, kill, killall, killall5, less, link, ln, logger, logname, logread, lpq, lpr, ls, lzcat, lzma, lzop, man, md5sum, microcom, mkdir, mkfifo, mknod, mktemp, more, mpstat, mv, nc, nice, nl, nmeter, nohup, nologin, nuke, od, paste, patch, pgrep, pidof, pipe_progress, pkill, pmap, poweroff, printenv, printf, ps, pscan, pwd, pwdx, readlink, readprofile, realpath, reboot, renice, reset, resize, resume, rev, rm, rmdir, rpm, rpm2cpio, run-parts, scriptreplay, sed, seq, setsid, sh, sha1sum, sha256sum, sha3sum, sha512sum, shred, shuf, sleep, smemcap, sort, split, ssl_client, stat, strings, stty, su, sulogin, sum, svok, sync, syslogd, tail, tar, tee, telnet, telnetd, test, tftp, tftpd, timeout, top, touch, tr, traceroute, traceroute6, true, truncate, tty, ttysize, uname, uncompress, unexpand, uniq, unix2dos, unlink, unlzma, unxz, unzip, usleep, uudecode, uuencode, vi, volname, watch, wc, wget, which, whoami, xargs, xxd, xz, xzcat, yes, zcat

I was able to test/fix most of them, and they seem to work fine. One problem I found is that all the tools working with processes depend on the Linux procfs. For now I simply changed the path from /proc to /compat/linux/proc/ and added a note about the linprocfs requirement to the port; for real use this needs to be patched to use the native BSD KVM API. Most of the network-interface-related applets will also need to be ported to make the network configurable. Still, the basic functionality works and I was able to create a BusyBox-only jail with working networking tools, shell, etc. A statically linked BusyBox takes about 4 MB. Maybe it would be easier to take some of the missing tools from /rescue instead of the BusyBox port – we will see. Another step to complete is to get init working; after that it should be possible to run a FreeBSD kernel with a BusyBox-only userland. I will try to send all my BSD-specific patches upstream – we will see whether they are accepted or not.

Testing in jail:

  1. Install busybox with the STATIC option set.
  2. Create a directory for the jail. I am using /root/test in this example. Create some subdirectories: mkdir -p /root/test/dev /root/test/bin. Copy busybox into the jail: cp /usr/local/bin/busybox /root/test/bin/
  3. Create /etc/jail.conf:
testjail {
   path = /root/test;
   mount.devfs;
   host.hostname = testhostname;
   ip4.addr = 192.168.101.113;
   interface = ue0;
   exec.start = "/bin/busybox";
}
  4. Finally run the jail using jail -c testjail. You should see an ash prompt! You can create all the /bin links using busybox itself:
cd /bin
./busybox --list|./busybox xargs -n1 ./busybox ln -s busybox

To use the network tools you will also need to create an /etc/resolv.conf file with a DNS server to use, e.g.:

# echo nameserver 8.8.8.8 > /etc/resolv.conf
# wget google.com
Connecting to google.com (172.217.23.206:80)
Connecting to www.google.com (172.217.23.228:80)
index.html           100% |*************************************************| 11897  0:00:00 ETA
#

Comments and suggestions are welcome.


Finding “orphaned” packages on FreeBSD

On some of my old FreeBSD systems I found packages that are installed locally but have already been removed from the ports tree. As a result, such packages cannot be upgraded automatically and may cause security risks and problems during later OS upgrades.

I did not find any pkg command which finds such orphans, but it turned out to be very easy to do with a simple one-liner:

pkg info --origin -a | \
awk '{print "ls /usr/ports/"$2 " > /dev/null 2>/dev/null || echo Origin not found: "$1}'| \
sh

The ports collection needs to be installed and updated before running it. Sample output is provided below:

$ pkg info --origin -a|awk '{print "ls /usr/ports/"$2 " > /dev/null 2>/dev/null || echo Origin not found: "$1}'|sh
Origin not found: GeoIP-1.6.11
Origin not found: bind99-9.9.11P1_1
Origin not found: cdiff-1.0.3,1
Origin not found: libcheck-0.10.0
Origin not found: p5-Geo-IP-1.51
Origin not found: pecl-intl-3.0.0_11
Origin not found: php56-5.6.32_1
Origin not found: php56-bz2-5.6.32_1
Origin not found: php56-ctype-5.6.32_1
Origin not found: php56-curl-5.6.32_1
Origin not found: php56-dom-5.6.32_1
Origin not found: php56-exif-5.6.32_1
Origin not found: php56-fileinfo-5.6.32_1
Origin not found: php56-filter-5.6.32_1
Origin not found: php56-gd-5.6.32_1
Origin not found: php56-hash-5.6.32_1
Origin not found: php56-iconv-5.6.32_1
Origin not found: php56-json-5.6.32_1
Origin not found: php56-ldap-5.6.32_1
Origin not found: php56-mbstring-5.6.32_1
Origin not found: php56-mcrypt-5.6.32_1
Origin not found: php56-mysql-5.6.32_1
Origin not found: php56-mysqli-5.6.32_1
Origin not found: php56-openssl-5.6.32_1
Origin not found: php56-pdo-5.6.32_1
Origin not found: php56-pdo_mysql-5.6.32_1
Origin not found: php56-posix-5.6.32_1
Origin not found: php56-session-5.6.32_1
Origin not found: php56-simplexml-5.6.32_1
Origin not found: php56-wddx-5.6.32_1
Origin not found: php56-xml-5.6.32_1
Origin not found: php56-xmlreader-5.6.32_1
Origin not found: php56-xmlwriter-5.6.32_1
Origin not found: php56-xsl-5.6.32_1
Origin not found: php56-zip-5.6.32_1
Origin not found: php56-zlib-5.6.32_1
Origin not found: swig13-1.3.40_1

In the example above it is clear that php56 needs to be replaced with a recent version, as do a few other packages.
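The same check can also be expressed as a short, testable script – a Python sketch of the one-liner above (my illustration; the find_orphans helper name is made up):

```python
import os
import subprocess

def find_orphans(lines, portsdir="/usr/ports"):
    """Given 'pkg info --origin -a' output lines ('name-version  category/port'),
    return the packages whose origin directory no longer exists in the ports tree."""
    orphans = []
    for line in lines:
        parts = line.split()
        if len(parts) != 2:
            continue
        name, origin = parts
        if not os.path.isdir(os.path.join(portsdir, origin)):
            orphans.append(name)
    return orphans

def main():
    out = subprocess.check_output(["pkg", "info", "--origin", "-a"], text=True)
    for name in find_orphans(out.splitlines()):
        print("Origin not found:", name)

# run main() on a FreeBSD host with the ports tree installed
```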


How to support old OS X versions with a recent Xcode

About

In this post I decided to share my experience with supporting builds for old Mac OS X versions using a recent Xcode/clang. I found this topic to be very tricky, and the documentation does not cover many parts of it.

How things should work

I am currently using OS X 10.14.1 as a build host with the latest Command Line Tools installed (via xcode-select --install). The 10.14.1 SDK should support compilation for any OS X version from 10.9 to 10.14. We can specify the target version using the -mmacosx-version-min= switch, which is used by the compiler/linker to generate correct binaries for the target system. Sounds brilliant! So let's test this and see why it is not always as good as it could be.

Testing with plain C code

Let's start with some simple code. I will use clock_gettime as an example: this POSIX function was added in OSX 10.12 and is frequently used in open-source software. Here is the sample code:

/*
 simple gettime test
 */
#include <stdio.h>
#include <unistd.h>
#include <stdlib.h>
#include <time.h>
#define BILLION  1000000000.0

int main( int argc, char **argv )
  {
    struct timespec start, stop;
    double accum;

    if( clock_gettime( CLOCK_REALTIME, &start) == -1 ) {
      perror( "clock gettime" );
      exit( EXIT_FAILURE );
    }
    sleep(1);

    if( clock_gettime( CLOCK_REALTIME, &stop) == -1 ) {
      perror( "clock gettime" );
      exit( EXIT_FAILURE );
    }

    accum = ( stop.tv_sec - start.tv_sec )
          + ( stop.tv_nsec - start.tv_nsec )
            / BILLION;
    printf( "%lf\n", accum );
    return( EXIT_SUCCESS );
}

So, let's start testing.

  1. clang -Wall -o gettime gettime.c gives no errors and produces a working binary. We can check the minimal supported version with otool -l gettime|grep -A 4 'LC_BUILD_VERSION' – in our case it is 10.14.
  2. Let's try to restrict the version to 10.9. The command clang -mmacosx-version-min=10.9 -Wall -o gettime gettime.c silently produces a gettime binary! The minimal OSX version is now marked as 10.9. That is weird, because nm -a gettime shows a reference to the _clock_gettime symbol, which does not exist on 10.9 at all! As expected, this binary fails when run on OSX 10.11:
        ./gettime
        dyld: lazy symbol binding failed: Symbol not found: _clock_gettime
          Referenced from: /Users/vagrant/./gettime
          Expected in: /usr/lib/libSystem.B.dylib
        dyld: Symbol not found: _clock_gettime
          Referenced from: /Users/vagrant/./gettime
          Expected in: /usr/lib/libSystem.B.dylib
        Trace/BPT trap: 5
    

    So despite the switch, clang created a broken binary!

  3. It is possible to disable weak imports with the -no_weak_imports linker flag. Let's try it:

    clang -Wl,-no_weak_imports -mmacosx-version-min=10.9 -Wall -o gettime gettime.c
    

    will fail to link with an error:

    ld: weak import of symbol '_clock_gettime' not supported because of option: -no_weak_imports for architecture x86_64.
    

Sounds like a solution! Moreover, we can add the -Werror=partial-availability flag, so that compilation fails early with gettime.c:17:9: error: 'clock_gettime' is only available on macOS 10.12 or newer [-Werror,-Wunguarded-availability], which explains exactly where the problem is. So it looks like we can just add -Werror=partial-availability to C(XX)FLAGS and -Wl,-no_weak_imports to the link flags to solve all the issues?
Not so easy, unfortunately.

Testing with autoconf or cmake

In the real world, software is typically built using a build system, which tries to detect what is available on the host and set defines accordingly. In the open-source world, autoconf and cmake are very popular choices. Now let's see if we can build some projects with the compatibility flags set to 10.9. As a real-world example I will use XZ Utils, which uses autoconf as its build system.

  1. Let's try to build XZ Utils 5.2.4 with the default CFLAGS/LDFLAGS. A typical ./configure && make run produces the binary src/xz/.libs/xz with the minimal supported OS set to 10.14. But our goal is to compile it for 10.9+, so let's do make clean.
  2. Now let's set CFLAGS/LDFLAGS to enforce 10.9 compatibility:
    export CFLAGS="-mmacosx-version-min=10.9 -Werror=partial-availability"
    export LDFLAGS="-Wl,-no_weak_imports"
    

    and re-run ./configure && make. This time the build fails with the ../../src/common/mythread.h:250:19: error: '_CLOCK_REALTIME' is only available on macOS 10.12 or newer [-Werror,-Wunguarded-availability] error message.

Looking at the source, it is clear that HAVE_CLOCK_GETTIME is set despite the fact that we used the correct compiler and linker flags. Why does this happen? Because of autoconf's detection algorithm. To detect whether a function is available, it tries to compile and link a simple program like this:

#ifdef __cplusplus
extern "C"
#endif
char clock_gettime ();
int
main ()
{
return clock_gettime ();
  ;
  return 0;
}

As you can see, autoconf declares its own function prototype (because it only wants to check that the function exists in the system), so all of the linker/preprocessor version-selection logic in clang/OSX never runs: that logic depends on the declarations in the SDK headers.

Let's check this on the command line:

clang -mmacosx-version-min=10.9 -Werror=partial-availability -Wl,-no_weak_imports -o test test.c

will return no errors, but the resulting binary links against the _clock_gettime symbol, which will not be available on the target OS. I found that cmake checks behave exactly the same way, resulting in a broken binary, or in a compilation/link error if the appropriate flags are set.
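One way to make such a check honest is to probe through the real headers, so that clang's availability logic gets a chance to run. A hedged configure.ac sketch (my own illustration, not what XZ Utils actually ships):

```m4
dnl Probe clock_gettime via <time.h> instead of a bare prototype, so
dnl -Werror=partial-availability can reject it for old deployment targets.
AC_LINK_IFELSE(
  [AC_LANG_PROGRAM(
     [[#include <time.h>]],
     [[struct timespec ts;
       return clock_gettime(CLOCK_REALTIME, &ts);]])],
  [AC_DEFINE([HAVE_CLOCK_GETTIME], [1],
             [Define to 1 if clock_gettime is usable.])],
  [])
```

With the headers included, the availability attributes from the SDK apply to the probe, and the check fails cleanly when the deployment target is too old.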

Workarounds

If it is your own code, without a million dependencies, probably the easiest way is to patch the autoconf/cmake checks to use the system headers instead of declaring their own prototypes. However, if you have a ton of dependencies, it would take a huge amount of time to patch every failing check. So I found another workaround.

If the recent (10.14) SDK is not easy to use in our autoconf/cmake scenario, let's use the one from 10.9 instead. I downloaded and unpacked the 10.9 SDK into the /Library/Developer/CommandLineTools/SDKs/MacOSX10.9.sdk directory. It turned out to be enough to point the SDKROOT environment variable at the MacOSX10.9.sdk directory to fix the problem. This SDK does not contain any declarations of the incompatible symbols, so the autoconf/cmake checks work as expected, and you get a 10.9+ compatible Mach-O binary at the end of the game.
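Put together, the workaround for an autoconf project looks roughly like this (the otool verification step is my addition; binaries built for older targets carry an LC_VERSION_MIN_MACOSX load command):

```shell
# Build against the older SDK so configure checks cannot see 10.12+ symbols
export SDKROOT=/Library/Developer/CommandLineTools/SDKs/MacOSX10.9.sdk
export CFLAGS="-mmacosx-version-min=10.9"
./configure && make
# Verify the deployment target recorded in the resulting binary
otool -l src/xz/.libs/xz | grep -A 3 LC_VERSION_MIN_MACOSX
```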

Summary

It is possible, but not trivial, to keep compatibility with older systems using a recent clang/OSX.

If you know a better solution or workaround, or just found this post useful, please let me know in the comments.
