Git: Overwriting ‘master’ with Another Branch

In many git workflows, all changes to your code should eventually end up in the master branch. You may also have a develop branch which contains code changes that are not ready for production deployment yet.

For one reason or another, you may end up in a situation where your develop branch has changed so much that you can no longer easily merge it into master. Most of those reasons point to bad practices, but such a situation may also arise from changes to your git workflow or deployment process.

One way out of this dilemma is to completely replace master with the current develop. There are two ways to achieve that.

Merge Strategy ‘Ours’

You can use the following commands to merge develop into master using the ‘ours’ merge strategy:

git checkout develop
git merge -s ours master
git checkout master
git merge develop

The resulting master now contains the contents of your previous develop, while any changes that existed only on master are ignored.
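
If you want to double-check the result, you can diff the two branches; an empty output means they now point to identical content:

git diff master develop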

This method’s advantage is that you get a clean merge commit and other developers using those two branches are less likely to experience problems when merging their feature branches.

The downside is that this merge might fail if your develop and master have diverged to a large degree.

Force Pushing

A more brutal alternative is to force push the develop branch under a different name:

git push -f origin develop:master

With the -f flag, your previous master is completely overwritten with develop, including its history. Warning: this erases all commits from the master branch that are not also part of the develop branch.

This solution may be appropriate in your case if you have a small number of other branches and/or other developers. The downside of this approach is that all developers who already have a local copy of the master branch will need to perform a git reset --hard.
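
For reference, a rough sketch of what each affected developer would then run locally (note that this discards any local commits on master that are not part of the rewritten remote branch):

git fetch origin
git checkout master
git reset --hard origin/master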

Force pushing to the master branch might fail if you use GitLab’s “Protected Branches” feature. You can either make sure your user has the required permissions or disable the protection for a few seconds until your push has gone through.

Get Two Free Compute Units from Oracle Cloud

Oracle currently offers an Always Free tier of its compute cloud. In this article, I will show you how to register an account, create the free tier instances and connect to them via PuTTY on Windows.

What is included?

While the offer is indeed free for an unlimited time, the following limitations and requirements apply:

  • 2 Autonomous Database services (ATP serverless and ADW), each limited to 1 OCPU and up to 20 GB of storage
  • 2 compute instances (1 GB RAM and 1/8 OCPU each)
  • a valid credit card and phone number are required for the registration process
  • 250 EUR of cloud credits are also included, but they must be used within the first 30 days

Getting started

Browse to https://www.oracle.com/cloud/free/ and register an account. After login, you will be welcomed by the following dashboard:

Navigate to “Create a VM instance”, enter a name (or leave the default) and choose your favourite operating system. However, only a few of them are eligible for the free tier:

Before hitting the “Create” button, you need to set up your SSH authentication.

Continue reading

Improve GitLab Pipeline Performance with DAGs

Directed Acyclic Graph (DAG) style dependencies between individual jobs in a continuous deployment pipeline allow for a more flexible workflow and better utilization of the available computational resources.

Imagine a simple pipeline consisting of three jobs:

  1. A syntax check
  2. A code complexity check
  3. Running all unit tests

You may be tempted to group those into two stages: A) Build (consisting of jobs 1 and 2) and B) Test (consisting of the unit tests):

Traditional Sequences

In plain old GitLab pipelines, you would define that stage A needs to execute before stage B and everyone would be happy.

Except that the syntax check might be quite fast (let’s assume 30 seconds) while the code complexity check may be very slow (say 4 minutes). The unit tests then have to wait max(30 sec, 4 min) = 4 minutes before they can be executed, resulting in an overall slow pipeline:

Continue reading

iPhone Hacks – Should Apple Have Seen It Coming?

In another article I summarized the series of events that led to a potentially huge number of iOS devices being taken over by malicious actors. As more and more information about these incidents is revealed, one particularly interesting question should be raised: To what extent is Apple to blame?

Fast Reaction

Let’s start with the good news. As Project Zero researcher Ian Beer writes, they informed Apple about two of the exploits on February 1st, 2019. Apple reacted within six days and released an emergency update (iOS 12.1.4) on February 7th. This short reaction time is exemplary (especially compared to Microsoft – it recently took them more than 90 days to fix a critical Windows vulnerability reported by Project Zero, which resulted in Google disclosing the vulnerability as previously announced).

Sloppy Quality Assurance?

However, this is where Apple’s exemplary behavior ends. Again according to Ian Beer, Project Zero has identified severe mistakes made by Apple that allowed the attackers to circumvent its security mechanisms. Since Apple declined to comment on the exploits, his and his colleagues’ views are taken as the only reliable source of knowledge here.

Continue reading

11 Answers to the Latest Apple iOS Exploits

On August 29th, 2019, the British security researcher Ian Beer (@i41nbeer) from Project Zero at Google published multiple blog posts about a series of iOS exploits. According to the team’s findings, those exploits had been used to completely take over iOS devices. This article provides focused answers to eleven questions about this series of events.

What is the overall impact of this attack?

If

  • you used an iOS device (iPhone, iPad, …) in the last two years and
  • visited a certain hacked site (more on that later)

your device could have been taken over by the attackers.

What does “taken over” mean?

Complete access to all your data, including

  • All messages (even end-to-end encrypted ones from WhatsApp and iMessage, as well as unencrypted texts)
  • Contacts
  • Passwords (iOS Keychain)
  • Emails
  • Third-Party Application Data (Facebook, Telegram, Skype, …)
  • Locations (via GPS)

What was the attackers’ goal?

Continue reading

Installing Kali Linux: Fix “Couldn’t mount CD-ROM” error

This is going to be a short one. You may be experiencing trouble when installing Kali Linux from a USB flash drive:

Your installation CD-ROM couldn't be mounted. This probably means that the CD-ROM was not in the drive. If so you can insert it and try again.

You may be inclined to waste a few hours following one of the countless articles suggesting that you manually open a shell, change the way your USB stick is mounted, and try to fix the issue that way.

However, chances are there is a simpler solution if you used the popular “LiLi USB Creator” tool on Windows to prepare your flash drive: forget LiLi USB Creator and use Win32 Disk Imager instead. Everything will work fine; you can thank me later.

Send JSON objects via POST to Spring Boot Controllers

Creating and persisting business objects with Spring Boot is amazingly easy. Assume you create an API for simple CRUD methods and want to create an entity based on data entered by your frontend users. In the old days, you would probably populate several POST fields in a key-value style and create your business object manually from those fields.

Spring Boot offers an easier solution. As long as your internal data model matches the frontend’s data model, you can annotate a @RequestBody controller method argument with @Valid to automatically create the object from a JSON-serialized request and execute the necessary validation logic.

Continue reading

Use Ansible to Deploy Software from git

Imagine you work on an application on a development server for several months until it is time to deploy it to a production system for the first time. Chances are, there are several necessary configuration tasks just waiting to be forgotten: firewall permissions, specific software libraries, file permissions and so on.

Ansible offers a reproducible and automatable way to take care of these configuration changes for you – and the beauty is that it does not depend on a specific Linux flavour and works both for single-machine deployments and distributed systems.

If you have never wondered why your application kept returning HTTP errors until you noticed that the cache folder did not have the correct permissions, stop reading; if you have never forgotten which libraries you had to apt-get install before the Makefile finally completed without errors, this is not the guide for you. Otherwise, read on to see how a simple 50-line yml file can take care of your deployment challenges.

Continue reading

How to Use JMeter to Performance Test a REST API

Performance testing a REST API reveals its runtime behaviour under stress and can be an early indicator of QoS violations in production. Apache JMeter offers a GUI mode in which such load tests can be created and their results analyzed easily.

In this tutorial, I will show you how to test the performance of the FizzBuzz API written in Rust that was presented in one of my previous articles. Let’s get started.

Install JMeter

Installing JMeter is very simple once you have Java installed on your machine (there are numerous tutorials to install Java, so I won’t go into detail here).

Once you can run the following command, you can continue with JMeter:

java -version

Download JMeter from the official Apache JMeter download page (select the ZIP archive) and unpack it. In the extracted folder, go to bin/ and execute jmeter.sh (on Linux) or jmeter.bat (on Windows). You should be greeted by the following GUI:
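
As a rough sketch, these steps might look as follows on Linux (the archive and folder names depend on the JMeter version you downloaded; 5.6 is just an example):

# unpack the downloaded ZIP archive
unzip apache-jmeter-5.6.zip
# start the GUI from the bin/ folder
cd apache-jmeter-5.6/bin
./jmeter.sh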

Create a Simple Load Test

We’ll add a Thread Group which represents a group of artificial users interacting with your API. In the left panel, right click on “Test Plan” and select “Add -> Threads (Users) -> Thread Group”:

Enter the following parameters on the right panel under “Thread Properties”:

The number of threads determines how many simulated users connect to your REST API simultaneously. The ramp-up period defines the time span over which JMeter starts all of these threads.

For example, if we have 64 users and a ramp-up period of 100 seconds, JMeter adds a delay of 100 / 64 ≈ 1.6 seconds between each user’s first request.

Define the Invoked Action

You now have to instruct JMeter on which action should be performed by the simulated users. First, we define the base URL. Right click on the Thread Group and select “Add -> Config Element -> HTTP Request Defaults”.

For my Rust application, I use the following settings:

The default protocol is HTTP and I use localhost:8000 since this is the default port for a Rocket web application.

Next, define which specific endpoint should be used by adding an HTTP Request via “Add -> Sampler -> HTTP Request”:

And enter the following settings:

Notice that protocol and server name / port are left blank because they were set by the HTTP defaults earlier.

In my example, I’m using a GET request on the endpoint /<count> which controls how many FizzBuzz iterations are computed. By changing this number, the computational load per request can be modified. For example, /5000 means that FizzBuzz is computed for the numbers 1 to 5000.
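
If you want to verify the endpoint manually before load testing it, a quick request from the command line could look like this (assuming the API is running locally on port 8000):

curl http://localhost:8000/5000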

Generating a Response Time Graph

Since we want to monitor how our API performs, the response time graph is an illustrative way of showing the average response time per request. Add it by right clicking the Thread Group and selecting “Add -> Listener -> Response Time Graph”.

The resulting graph will display how long on average it takes for an API call to produce a result.

Run the Test

Although the GUI is not a reliable way to execute tests and the command-line interface should be used instead, you can quickly verify that your test is working correctly by pressing the play button.
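
For an actual measurement run, a non-GUI invocation might look roughly like this (assuming your test plan was saved as fizzbuzz.jmx):

# run the test plan in non-GUI mode and write the results to a log file
jmeter -n -t fizzbuzz.jmx -l results.jtl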

Before starting the test, make sure your API is running and accepting HTTP requests. For Rust and Rocket, you can either run the application with cargo run or compile it and run the resulting binary with cargo build --release and ./target/release/fizzbuzz.exe.
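
In other words, one of the following should do (the binary name assumes the project is called fizzbuzz, as in the companion article; on Windows, append .exe):

# debug build: compile and run in one step
cargo run

# optimized release build, then run the resulting binary
cargo build --release
./target/release/fizzbuzz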

After pressing the play button to start your test, you can track how many connections have been opened on the upper right corner of the interface (14 of 64 users in this example):

While the test is running, the Rust console window shows a large number of incoming connections when using cargo run:

Interpreting the Response Time Graph

During and after the test, you can have a look at the response time graph by selecting “Response Time Graph” in the panel on the left and “Display graph” in the panel on the right.

In this example, you can easily see that the average response time for 64 parallel users is about 1.3 seconds on my absolutely non-competitive hardware (after the ramp-up period, during which more and more users are added).

Keep in mind that there are many factors contributing to the fact that this number won’t match your real response time in production, including:

  • both JMeter and the API run on the same hardware, so there is almost no network latency
  • JMeter itself consumes part of the hardware resources
  • in a production environment, a web server such as nginx would sit in front of the API
  • performance optimizations such as caching would be applied

However, a quick measurement such as this can be more than enough when you just want to verify that you can support a given order of magnitude of parallel requests in production.

A REST API in Rust and Rocket in 5 Minutes

Learn How to Create a FizzBuzz Implementation in Rust

FizzBuzz is a classic software developer interview question with the simple goal of writing an application that outputs “Fizz” for numbers divisible by 3, “Buzz” for numbers divisible by 5, “FizzBuzz” for numbers divisible by both, and the number itself otherwise.

In this article, I will show you how to implement FizzBuzz using the Rust programming language and the Rocket framework.

Start by using Rust nightly and setting up a new Rust project using cargo:

rustup default nightly
cargo new fizzbuzz --bin
cd fizzbuzz
rustup update && cargo update

Edit Cargo.toml and add the following dependency for Rocket:

[dependencies]
rocket = "0.4.1"

Continue reading