

Introducing, after some months of delay, a new update of Bicis. It comes with small updates on the outside but huge changes on the inside. I am so happy to add two new cities to the predictions platform and expand Neural Bikes to the world. I hope you find this little big project as useful as I do, and love it as much as I love spending time working on it.

What's new?

  • Rewrote the app from the ground up; the previous version was not very clean and kept me from shipping regular updates.
  • New icon, that's always a good thing
  • Ditched a half-baked MVC implementation for MVVM with the Coordinator pattern and ReactiveSwift. It might be a bit overkill, but it's what I've learnt recently and I wanted to use it.
  • Worked a lot on the server side. The predictions have considerably improved, and that took a lot of time.
  • Added Madrid and New York City! These are two incredibly big services, with 200+ and 900+ bike stations respectively. That was a big jump from the 40 stations in Bilbao, and I had to change some things in the map rendering so performance wouldn't suffer.
  • I previously used CloudKit to serve the predictions. The app would access a public iCloud container with data for the app. I now have a public API that serves all of this.
  • Created some beautiful screenshots for the App Store using fastlane.
  • I have been debating whether to keep ads in the app. Part of me would rather not use them, but I've spent way too much time on this project and I'd like to see if I can earn a few cents every month to keep myself motivated. The web app of Neural Bikes takes a different approach and presents a donation button to my Ko-Fi page. Only one individual has donated 3€ (yay), and that made me crazy happy when I saw the email. These are two different models I'm trying, and I'd like to see where they both go.

Bicis version 2020.2


I've been trying to improve the predictions in Neural Bikes for a while with no luck. I read a lot about tuning hyperparameters and designing good LSTM networks: adding and removing layers, tweaking the number of neurons, adding a Dropout layer and changing the activation function. The differences were minimal and not noticeable; not even the training curves showed an improvement.

I also tried some feature engineering: adding the season of the year, and performing a K-means clustering analysis to identify usage patterns in the data. The latter actually made a difference.
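
As a minimal sketch of what that clustering looks like, here is K-means applied to toy daily availability profiles. The data, cluster count and shapes are illustrative, not my real dataset:

```python
import numpy as np
from sklearn.cluster import KMeans

# Toy data: one row per station-day, 24 hourly availability readings.
rng = np.random.default_rng(42)
commuter = 10 + 8 * np.sin(np.linspace(0, 2 * np.pi, 24))  # availability swings during the day
flat = np.full(24, 12.0)                                   # steady usage all day
profiles = np.vstack([
    commuter + rng.normal(0, 1, (50, 24)),  # 50 "commuter-pattern" station-days
    flat + rng.normal(0, 1, (50, 24)),      # 50 "flat-pattern" station-days
])

# Cluster the daily profiles into usage patterns.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(profiles)
labels = kmeans.labels_  # one usage-pattern label per station-day
```

The resulting label can then be fed back into the training data as one extra categorical column per observation.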

There are a couple more columns I could add to improve accuracy: weather-related variables like rain, wind or temperature. My main issue is that once the model is in production I would need a reliable data source every day to make predictions. The moment that source fails or disappears, I can't predict anything. For now I don't see this as a solution, nor do I feel safe making that compromise.

What I hadn't tried was a new framing of the data. When transforming my dataset into a supervised problem, I was only using one day of previous observations to predict one day of availability. That was not enough: if it had rained the previous day and the recorded availability was low, the predicted availability for the next day would also be low. That is not true at all, and it messed with my system.

The final tweak was giving the model more lag observations as input. The predictions now use three days of previous availability to predict one day of availability, and the results have greatly improved over the previous iteration. The only drawback is the increase in training time and data processing. Using only one previous day, the data preparation step created 1008 columns for the supervised learning problem; with three previous days there are 2736 columns. That almost triples the number of columns, hence the increased training time.
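
The reframing above can be sketched with a sliding window. The sampling rate and series below are made up for illustration; my real pipeline has more feature columns per timestep:

```python
import numpy as np

def make_supervised(series: np.ndarray, n_in: int, n_out: int):
    """Turn a univariate series into (X, y) pairs: n_in lag steps in, n_out steps out."""
    X, y = [], []
    for i in range(len(series) - n_in - n_out + 1):
        X.append(series[i : i + n_in])
        y.append(series[i + n_in : i + n_in + n_out])
    return np.array(X), np.array(y)

# Assume availability sampled every 10 minutes → 144 observations per day.
per_day = 144
series = np.arange(10 * per_day, dtype=float)  # ten days of toy data

# Old framing: one day of lags in, one day out.
X1, y1 = make_supervised(series, n_in=1 * per_day, n_out=per_day)
# New framing: three days of lags in, one day out → three times as many input columns.
X3, y3 = make_supervised(series, n_in=3 * per_day, n_out=per_day)
```

With real multi-feature data every extra lag day multiplies the input columns the same way, which is where the jump from 1008 to 2736 columns comes from.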

This new neural network has been running for almost a month and the improvements are easy to see. Bilbao benefits the most, as it has over a year of training data, whereas Madrid and New York only have six and three months respectively.

New predictions

  • Ran 1900 kilometers and ran my fourth marathon helping a friend finish her first, but got injured twice
  • Read 20 books, 8 more than last year
  • Attended my first iOS conference (NSSpain) as a volunteer and met new people
  • Shipped Neural Bikes website, deployed into production a neural network and added two more cities (Madrid & New York)
  • Traveled to see my running idol, Eliud Kipchoge, break two hours on the marathon
  • Started freelancing as an iOS developer
  • Classes of my Telecommunications Engineering MSc end in January, and I graduate in July
  • Started applying for jobs and did two phone interviews
  • I need to work less, side projects are still work

I realized some time ago that Neural Bikes wasn't scalable. This wasn't a problem while it only had Bilbao's bike sharing service, with just 40 stations. When I added Madrid, with more than two hundred, it suddenly became one.

The map rendering went like this: when a request came in, the Node.JS backend fetched the name and coordinates of every station in that city, parsed the availability and predictions for all the stations from my own API, and then rendered the map with all of that data. Yes, I downloaded all the predictions and availability for every station: the 40 stations in Bilbao, the 200 in Madrid and the 900+ in New York. Now you can see the problem.

When I shipped Madrid, loading times went from around two seconds for Bilbao to up to ten. I hadn't even stopped to think whether I was using the right data structures, but I should have done that a long time ago.

The solution? Refresh some concepts of Big O notation, simplify things a bit and change some data structures.

Summing it up: I was using arrays to store the data instead of a simple key-value structure like a dictionary, which frees me from iterating through all the data.
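
A toy comparison of the two approaches (the station names and fields are made up, and my real backend is Node.JS; the idea is the same in any language):

```python
# Array approach: finding one station means scanning every element, O(n).
stations_list = [
    {"name": "SAN PEDRO", "free_bikes": 7},
    {"name": "MOYUA", "free_bikes": 3},
    # ...hundreds more entries in Madrid or New York
]

def find_in_list(name):
    for station in stations_list:      # worst case touches every station
        if station["name"] == name:
            return station
    return None

# Dictionary approach: key → value, O(1) average lookup, no iteration.
stations_dict = {s["name"]: s for s in stations_list}

def find_in_dict(name):
    return stations_dict.get(name)
```

With 40 stations the difference is invisible; with 900+ stations, and a lookup per rendered pin, the linear scans add up fast.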

As of now, all the available cities (Bilbao, Madrid & New York) load almost immediately, and the prediction data is loaded on demand when tapping on a station. Loading all the data up front was a big waste of time; in New York, with over 900 stations, no one would ever check all of them.

Having said that, New York is now available on Neural Bikes!


Neural Bikes is now available for BiciMAD. The training of the neural network was a big step from Bilbao. Bilbao has 40 bike docks, whereas Madrid has a bit over 200.

I've been wanting to monetize this project for a while. I love working on it, but it takes away some of my free time. Also, I'm trying to move the server load from the old 2011 Mac Mini in my room to the cloud, and that will mean recurring costs. I'm adding a Ko-Fi button to donate if the project is useful to you. Let's see if it works!

This release also adds new endpoints to the API.

For a long time I've wanted to make all the data that Neural Bikes uses publicly available. The time has come. Neural Bikes is a neural network based system that predicts bike sharing availability from previous data.

To make the data I use in my app available to the public, I've built an API. It lets the user query two endpoints and get the prediction values for the day and the accumulated daily availability.

If you want to be more specific and query a single station, that's also possible on both endpoints.

To query individual stations use the name without the prepended id; for station names that contain spaces, use percent encoding, e.g. SAN%20PEDRO.
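
In Python, for example, the percent encoding can be produced with the standard library. The station name is the one from above, but the URL is a made-up placeholder, not the real endpoint:

```python
from urllib.parse import quote

station = "SAN PEDRO"
encoded = quote(station)  # spaces become %20 → "SAN%20PEDRO"

# Hypothetical example URL; substitute the real API base and endpoint path.
url = f"https://example.com/api/prediction/{encoded}"
```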

Currently the API is only available for Bilbao; the plan is to release more cities before the end of the year, after training and testing the models.

The data, predictions and API are a project of mine. This is not related in any way to Bilbao's council nor the service supplier.


The Network Extension framework is one of the most customizable frameworks Apple provides, allowing you to customize and extend the core networking features of iOS and macOS.

While this article by Alexander Grebenyuk covers this topic in depth I would like to add some things I have learnt.

I have recently worked on a project implementing a VPN using the OpenVPN protocol. This is not supported natively by the networking framework and requires a third party library, like OpenVPNAdapter.

The WWDC sessions from 2015 and 2017 (Part 1 and Part 2) are especially useful to grasp the essential points and see how this framework is used.

Using the Packet Tunnel Extension to Implement a VPN

Imagine you want to tunnel all your outgoing traffic through the VPN connection. All your traffic will be sent to your VPN server, which is also your DNS server. This setup can be used to block certain websites, like gambling or adult content, or to access an internal company website.

This can be done on iOS using a Packet Tunnel, and for the app to be published on the App Store it can't rely on frameworks that require managed devices.

For unmanaged devices there are still some tools at hand, like the on-demand rules. These let the system automatically start a VPN connection based on different rules; in this case, forcing the system to establish a tunnel whenever it acquires internet connectivity, cellular or WiFi.

OpenVPN has implemented their own solution for an always on VPN tunnel, called seamless tunnel. The implementation is not public but it seems to work for them.

Implementing a Packet Tunnel Network Extension divides the app into two targets: the main app, and a target that subclasses NEPacketTunnelProvider. Subclassing this class grants access to a virtual network interface. Creating a packet tunnel provider also requires configuring the Info.plist file.

The main app is only in charge of tasks like installing the VPN profile into the device's Settings app. The extension target does all the networking work: starting, stopping and managing all the states the tunnel can be in.

As there is not a lot of documentation or many sample projects, here are some things that explain how this framework works and how you can build an always-on VPN tunnel on iOS devices.

  • As long as your device is charging, the tunnel is kept alive. The KEEPALIVE_TIMEOUT messages are used to probe the other side of the tunnel and make sure it's still up. If your device is unplugged, the tunnel may have died and your traffic will go out over another interface.
  • Higher level APIs like URLSession do not redirect traffic through the interface created in the extension. Lower level APIs like createTCPConnectionThroughTunnel can force traffic to go through the tunnel.
  • A few seconds after locking the screen, the device goes to sleep. Override the sleep and wake methods: in the first one you should quiesce the tunnel as appropriate, and in the latter reactivate it. More info here.
  • Not 100% sure but if the device is charging the tunnel will never go to sleep.
  • Network interface changes can be monitored by subscribing to defaultPath, but it's not a good idea to base your reconnection logic on network changes. The standard approach is to monitor the path associated with your tunnel connection, and when that path fails, establish a new tunnel connection and transition to it. Source.
  • Understand your underlying tunnel infrastructure. That is, know whether an interface change will destroy your tunnel connection, or whether updating the tunnel settings will affect your connection. Source.
  • Code inside the network extension is subject to different rules. A request made with URLSession from inside the network extension does not leave the device through the tun interface, but the same request made from the main target does.
  • Detecting whether the tunnel is active is not an easy task. With the OpenVPNAdapter library there are some properties, like sessionName, that are nil unless the tunnel is connected. I've checked this value after waking up, and in some cases where the tunnel was not active it still wasn't nil, so I'm not completely sold on this working perfectly.
  • Read the logs coming from the VPN server. Some of them can indicate the status your tunnel is in, like the KEEPALIVE_TIMEOUT messages. I've opened an issue in the OpenVPNAdapter library to treat these messages as errors instead.
  • In some cases a lot of rapid reconnections and changes will be fired, and you'll want to debounce your actions during this transient period until the connection is stable.

When I started developing iOS apps I didn't know a single thing about backends or servers. I jumped straight in, hoping to learn along the way.

When I released my first app, Bicis, it did little to nothing. It parsed an XML file from a website and showed pins on a map. Those pins represented the bike sharing stations available in Bilbao; tapping a pin showed its availability.

As I continued to learn, I wanted to implement more things. Sometimes it was hard to get a bike if the station was in a busy area, so I thought I could add some kind of value by showing an estimate of the expected availability.

It was a simple graph showing the average availability, powered by a Raspberry Pi in my room that ran some calculations on a MySQL database every 30 minutes using Python and a crontab. My home server was not publicly available and I couldn't guarantee full-time uptime (not that anyone would notice). With every Apple Developer account you can use iCloud as a backend and store data there; uploading the data from my Raspberry Pi only required Apple's CloudKit.JS. Its lack of documentation makes it a bit hard to get started, but it does a pretty good job if you don't have a server of your own.

The app now has grown from a simple Swift app to a Swift app with a backend that uploads data to Apple's servers.

Some time later I realized that an average value is not that useful to the user, so I decided to jump in and up my game.

I started reading and learning how to perform Machine Learning predictions to predict the availability. I upgraded my server setup to an old Mac Mini server because of the new requirements. After some months of learning (and exams) I shipped the update so I had the complete iOS app with a Machine Learning backend.

Open the app and you get a map with all the stations, and for each one a graph with the expected availability and a history of that day's availability. How does this work in production right now?

Every day at midnight a script queries InfluxDB, a time series database, and calls another script that makes the predictions using the previously trained model; a Node.JS script then uploads the predictions to a database in iCloud. Separately, every ten minutes a cron job runs on my server, gathers data from the availability feed and saves it to InfluxDB. Once added, the data is uploaded to iCloud using CloudKit.JS, so users have each station's availability history to compare with the prediction.

Every month the neural network is retrained. The whole project is documented over at Neural Bikes.