Modifying binaries to replace proprietary APIs

Note: This is a follow-up post to exploring private APIs from late May.

My goal is to use the Things 3 macOS application with my own API. To get there, I previously built a working SDK for Things Cloud to understand the structure of the communication between client and server. This time I want to modify my Things 3 binary so it actually talks to an API of my choice. Let’s get started.

We know Things 3 talks to the Things Cloud API, so its endpoint must be encoded inside the binary somewhere. To find out where, let’s use strings:

strings looks for ASCII strings in a binary file or standard input.

$ strings /Applications/  | grep "cloud\."

The empty output tells me it’s not part of the main binary, so it must be part of some dependency. Next, I need to find out which dependencies Things 3 has, which can be done using otool:

The otool command displays specified parts of object files or libraries.

We’re specifically interested in shared libraries which come with the binary:

$ otool -L /Applications/ | grep "@"
  @rpath/FoundationAdditions.framework/Versions/A/FoundationAdditions (compatibility version 0.0.0, current version 0.0.0)
  @rpath/CoreJSON.framework/Versions/A/CoreJSON (compatibility version 0.0.0, current version 0.0.0)
  @rpath/KissXML.framework/Versions/A/KissXML (compatibility version 0.0.0, current version 0.0.0)
  @rpath/Base.framework/Versions/A/Base (compatibility version 0.0.0, current version 0.0.0)
  @rpath/ThingsModel.framework/Versions/A/ThingsModel (compatibility version 0.0.0, current version 0.0.0)
  @rpath/SyncronyCocoa.framework/Versions/A/SyncronyCocoa (compatibility version 0.0.0, current version 0.0.0)
  @rpath/ThingsTools.framework/Versions/A/ThingsTools (compatibility version 0.0.0, current version 0.0.0)
  @rpath/QuartzAdditions.framework/Versions/A/QuartzAdditions (compatibility version 0.0.0, current version 0.0.0)
  @executable_path/../Frameworks/TXOnboardingPopUpKit.framework/Versions/A/TXOnboardingPopUpKit (compatibility version 0.0.0, current version 0.0.0)
  @executable_path/../Frameworks/TXVisualDebugKit.framework/Versions/A/TXVisualDebugKit (compatibility version 0.0.0, current version 0.0.0)
  @executable_path/../Frameworks/TXTrialIndicatorKit.framework/Versions/A/TXTrialIndicatorKit (compatibility version 0.0.0, current version 0.0.0)
  @executable_path/../Frameworks/TXPopUpMenuKit.framework/Versions/A/TXPopUpMenuKit (compatibility version 0.0.0, current version 0.0.0)
  @executable_path/../Frameworks/TXLinkDetectorKit.framework/Versions/A/TXLinkDetectorKit (compatibility version 0.0.0, current version 0.0.0)
  @executable_path/../Frameworks/TXListKit.framework/Versions/A/TXListKit (compatibility version 0.0.0, current version 0.0.0)
  @executable_path/../Frameworks/TXToolTipKit.framework/Versions/A/TXToolTipKit (compatibility version 0.0.0, current version 0.0.0)
  @executable_path/../Frameworks/TXCloudIndicatorKit.framework/Versions/A/TXCloudIndicatorKit (compatibility version 0.0.0, current version 0.0.0)
  @executable_path/../Frameworks/TXToolbarKit.framework/Versions/A/TXToolbarKit (compatibility version 0.0.0, current version 0.0.0)
  @executable_path/../Frameworks/TXTrialExpiredKit.framework/Versions/A/TXTrialExpiredKit (compatibility version 0.0.0, current version 0.0.0)
  @executable_path/../Frameworks/TXQuickEntryKit.framework/Versions/A/TXQuickEntryKit (compatibility version 0.0.0, current version 0.0.0)
  @executable_path/../Frameworks/TXDatePickerKit.framework/Versions/A/TXDatePickerKit (compatibility version 0.0.0, current version 0.0.0)
  @rpath/SMStateMachine.framework/Versions/A/SMStateMachine (compatibility version 0.0.0, current version 0.0.0)
  @executable_path/../Frameworks/TXWindowKit.framework/Versions/A/TXWindowKit (compatibility version 0.0.0, current version 0.0.0)
  @executable_path/../Frameworks/HockeySDK.framework/Versions/A/HockeySDK (compatibility version 1.0.0, current version 1.0.0)
  @executable_path/../Frameworks/TXMainWindowKit.framework/Versions/A/TXMainWindowKit (compatibility version 0.0.0, current version 0.0.0)
  @executable_path/../Frameworks/TXAppKit.framework/Versions/A/TXAppKit (compatibility version 0.0.0, current version 0.0.0)
  @executable_path/../Frameworks/TXTagKit.framework/Versions/A/TXTagKit (compatibility version 0.0.0, current version 0.0.0)
  @executable_path/../Frameworks/TXPopoverKit.framework/Versions/A/TXPopoverKit (compatibility version 0.0.0, current version 0.0.0)
  @executable_path/../Frameworks/TXCheckListKit.framework/Versions/A/TXCheckListKit (compatibility version 0.0.0, current version 0.0.0)

Lots of shared libraries, but we know we’re interested in functionality related to the cloud synchronization, so SyncronyCocoa looks like a likely candidate, as strings confirms:

$ strings /Applications/ | grep "cloud\."

Nice. Next, we need to modify the shared library to use a different domain which, conveniently for development, points to localhost:

$ cat /etc/hosts | grep cultt
 cloud.culttcoder.local

Note that it’s important that the new domain has the same number of characters as the one we’re replacing - if it’s shorter or longer, the Things 3 binary will crash on launch.

Now that we have a domain pointing to our local machine we need to patch the SyncronyCocoa.framework. I’ll be using dd to modify the binary. dd in combination with strings is a great tool for making smaller modifications to binary files.

First, we need to find the offset of the string inside the library, using strings:

$ strings -t d /Applications/ | grep "cloud\."

This tells us that the string is located at offset 36078. Now, we can use dd to change the library at position 36078 so it points to our local domain:

$ printf "https://cloud.culttcoder.local/\x00" > /tmp/api-dns
$ sudo dd if=/tmp/api-dns of=/Applications/ obs=1 seek=36078 conv=notrunc

Let’s verify the patch was successful:

$ strings -t d /Applications/ | grep "cloud\."
36078 https://cloud.culttcoder.local/

Again, nice. Now we’ve successfully patched the library to talk to our local domain, but Things 3 will crash. As it’s an App Store binary everything is code signed, and our patch invalidated the code signature. Let’s fix that:

$ sudo codesign -f -s - /Applications/
/Applications/ replacing existing signature

Alright, Things 3 works again, but we don’t have a compatible API server locally. For now, let’s set up a tiny proxy which forwards all requests to the real Things Cloud API:

As Things 3 requires a valid SSL certificate, we need to generate a self-signed certificate and import it with full trust into our system keychain for this to work:

$ openssl genrsa -out server.key 2048
$ openssl req -new -x509 -sha256 -key server.key -out server.crt -days 3650 -subj '/CN=cloud.culttcoder.local/O=Private/C=DE'
$ sudo security add-trusted-cert -d -r trustRoot -k /Library/Keychains/System.keychain server.crt

Note that the certificate must be valid for the domain chosen earlier. Next up, I’ve set up a tiny HTTPS proxy written in Go:

package main

import (
  "flag"
  "log"
  "net/http"
  "net/http/httputil"
)

func main() {
  listen := flag.String("listen", ":443", "port to listen on")
  flag.Parse()

  // Rewrite each incoming request so it is forwarded to the real
  // Things Cloud API (the upstream host is elided here), and dump
  // the full request, body included, to the log.
  director := func(req *http.Request) {
    req.URL.Scheme = "https"
    req.Host = ""
    req.URL.Host = ""
    req.Header.Set("Connection", "close")

    dump, _ := httputil.DumpRequest(req, true)
    log.Printf("%s\n", string(dump))
  }

  proxy := &httputil.ReverseProxy{Director: director}
  log.Printf("Listening on %s\n", *listen)

  err := http.ListenAndServeTLS(*listen, "server.crt", "server.key", proxy)
  if err != nil {
    log.Fatal("ListenAndServe: ", err)
  }
}
Now, when we start Things 3, it talks to Things Cloud just as before - but through our local proxy.

Things 3 talking to my local Things Cloud proxy

Soon, it won’t be talking to thingscloud at all…

That’s it for today, happy hacking!

Moving Forward

"Everything changes and nothing stands still." - Heraclitus

And 2017 will be quite a hectic year for me because of this. I’m looking forward to sharing more non-technical things over the next few months, too. Mostly pictures, I guess, but we’ll see what I find interesting enough to share (:

Exploring private HTTPS APIs

Today I want to take a look at how you can explore private HTTPS APIs. I’ll be using @culturedcode’s Things Cloud as an example: it’s the engine that keeps Things for iOS and Things for macOS in sync, and as there is no web version available it’s a little trickier to take a peek behind the scenes.

First off, some requirements: you need to be running macOS for this to work, and you need a Things 3 installation along with a Things Cloud account.

Now, all traffic between Things and Things Cloud is exchanged via HTTPS, which means all you can see from the outside is the DNS name the traffic is going to.

The first step should always be to hope for programming mistakes - maybe the app isn’t validating the HTTPS certificate at all, or isn’t using certificate pinning. If that were the case, one could use a regular proxy like mitmproxy to peek at the traffic.

$ pip install "mitmproxy==0.18.2"
$ mitmproxy

Now configure your system to route HTTPS traffic through your local proxy:

$ sudo networksetup -setsecurewebproxy "Wi-Fi" 8080
$ sudo networksetup -setsecurewebproxystate "Wi-Fi" on

Opening Things, however, won’t result in any network calls. Luckily for its users, culturedcode did a fine job of ensuring Things doesn’t just talk to anybody. As we don’t have the real private key to decrypt the traffic, we’re done here, right?

One can actually instruct the OS to dynamically load a library at startup, using DYLD_INSERT_LIBRARIES, and override the method calls executed when validating SSL certificates. man dyld says:

This is a colon separated list of dynamic libraries to load before the ones specified in the program. This lets you test new modules of existing dynamic shared libraries that are used in flat-namespace images by loading a temporary dynamic shared library with just the new modules.

At Black Hat 2012 this approach was first demoed - it lives on today as SSL Kill Switch 2. This allows us to disable the SSL verification in place.

Assuming you have Xcode installed, it’s easy to compile the dylib yourself. Once you’re finished, you first need to enable SSL Kill Switch:

$ export DYLD_INSERT_LIBRARIES=$(pwd)/SSLKillSwitch.framework/Versions/A/SSLKillSwitch
2017-05-28 23:44:51.952 sh[59630:4497019] === SSL Kill Switch 2: Fishhook hook enabled.
2017-05-28 23:44:52.033 sh[59632:4497025] === SSL Kill Switch 2: Fishhook hook enabled.
2017-05-28 23:44:52.056 tail[59639:4497037] === SSL Kill Switch 2: Fishhook hook enabled.
2017-05-28 23:44:52.076 sed[59635:4497038] === SSL Kill Switch 2: Fishhook hook enabled.

Now, start Things again:

# /Applications/
2017-05-28 23:47:10.755 Things3[59953:4500129] === SSL Kill Switch 2: Fishhook hook enabled.

When you foreground Things you should see API requests in your proxy:

mitmproxy with API requests

Now all that’s left is to use Things, watch which API calls are being made when, and start inferring how the API actually works. As you can see, peeking behind HTTPS APIs used by native applications is actually quite easy.

The results of me playing around with Things are available on GitHub: things-cloud-sdk. It’s a basic SDK which allows you to read from and write to Things Cloud; most of the work was actually spent guessing what the cryptic payload names mean.

I hope this helps the next time you want to peek behind the scenes of a native app & its private HTTPS API. Until then - happy hacking!

Awesome AWS CodePipeline CI

After several talks at work about the feasibility of using AWS CodeBuild and AWS CodePipeline to verify the integrity of our codebase, I decided to give it a try.

We use pull requests and branching extensively, so one requirement is that we can dynamically pick up branches other than master. AWS CodePipeline only works on a single branch out of the box, so I decided to use GitHub’s webhooks, AWS APIGateway and AWS Lambda to dynamically support multiple branches:


First, you create a master AWS CodePipeline, which will serve as a template for all non-master branches.
Next, you set up an AWS APIGateway and an AWS Lambda function which can create and delete AWS CodePipelines based off the master pipeline.
Lastly, you wire GitHub webhooks to the AWS APIGateway, so that opening a pull request duplicates the master AWS CodePipeline, and closing the pull request deletes it again.

example image of response time percentiles


AWS Lambda

For the AWS Lambda function I decided to use Go and eawsy, as the combination allows for extremely easy Lambda function deployments.
The implementation is straightforward and relies on the AWS Go SDK to interface with the AWS CodePipeline API.
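Stripped of all SDK plumbing, the Lambda’s core job is to clone the master pipeline definition under a branch-specific name and point its source stage at the pull request’s branch. A sketch with plain structs standing in for the real AWS SDK types (Pipeline and pipelineForBranch are my own names, not part of any SDK):

```go
package main

import (
	"fmt"
	"strings"
)

// Pipeline is a heavily simplified stand-in for the SDK's pipeline
// declaration: just a name plus the branch its source stage tracks.
type Pipeline struct {
	Name         string
	SourceBranch string
}

// pipelineForBranch derives a per-branch pipeline from the master template.
// CodePipeline names don't allow slashes, so the branch is sanitized first.
func pipelineForBranch(master Pipeline, branch string) Pipeline {
	clone := master
	clone.Name = master.Name + "-" + strings.ReplaceAll(branch, "/", "-")
	clone.SourceBranch = branch
	return clone
}

func main() {
	master := Pipeline{Name: "app-master", SourceBranch: "master"}
	pr := pipelineForBranch(master, "feature/multi-branch")
	fmt.Println(pr.Name, pr.SourceBranch) // app-master-feature-multi-branch feature/multi-branch
}
```

In the real function the clone is handed to the CreatePipeline API call, and the matching DeletePipeline call removes it when the PR closes.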

One catch here is that the AWS IAM permissions need to be set up to allow the Lambda function to manage AWS CodePipelines:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowCodePipelineMgmt",
            "Effect": "Allow",
            "Action": [
            ],
            "Resource": [
            ]
        }
    ]
}

AWS APIGateway

The APIGateway is managed via Terraform and consists of a single API, where the root resource is wired up to handle webhooks. GitHub-specific headers are transformed so they are accessible in the backend. As GitHub will call this APIGateway, we’ll need to set appropriate Access-Control-Allow-Origin headers, otherwise requests will fail:

resource "aws_api_gateway_integration_response" "webhook" {
  rest_api_id = "${}"
  resource_id = "${}"
  http_method = "${aws_api_gateway_integration.webhooks.http_method}"
  status_code = "200"

  response_templates {
    "application/json" = "$input.path('$')"
  }

  response_parameters = {
    "method.response.header.Content-Type"                = "integration.response.header.Content-Type"
    "method.response.header.Access-Control-Allow-Origin" = "'*'"
  }

  selection_pattern = ".*"
}
AWS CodePipeline

The AWS CodePipeline serving as the template is configured to run on master.
This way all merged pull requests trigger tests on this pipeline, while every open pull request runs on its own separate AWS CodePipeline. This is great because every PR can be checked in parallel.

The current implementation forces all AWS CodePipelines to be the same - it would be interesting to adjust this approach, e.g. by fetching the CodePipeline template from the repository, to allow pull requests to change it as needed.

AWS CodeBuild

In my example the AWS CodeBuild configuration is static. However, one could easily make it dynamic, e.g. by placing AWS CodeBuild configuration files inside the repository. This way PRs could actually test different build configurations.


The approach outlined above works very well. It is reasonably fast, and because each pull request gets its own pipeline, builds run fully in parallel. It also brings great extensibility options to the table: one could easily use this approach to spin up entire per-pull-request environments, and tear them down dynamically.
In the future I’m looking forward to working more with this approach, and maybe also abstracting it further for increased reusability.

The source is available on GitHub.