Pitfalls when Integrating Spring Session with Spring Security

Spring Session allows you to persist user sessions in a database. Combined with Spring Security, it is easy to create a setup where each logged-in user is represented by a row in a SPRING_SESSION table. Externalising sessions into a database is also a prerequisite for a zero-downtime deployment strategy where users stay logged in while a new application version is deployed.

On paper, integrating these two Spring components is very easy. Assuming you have a working Spring Security setup, just add this dependency:

<dependency>
    <groupId>org.springframework.session</groupId>
    <artifactId>spring-session-jdbc</artifactId>
</dependency>

and add this line to your application.properties:

spring.session.store-type=jdbc

and you should be good to go!

Value too long for column PRINCIPAL_NAME

In my case, the setup was unfortunately not working out of the box. After login, I noticed the following exception:

org.h2.jdbc.JdbcSQLDataException: Value too long for column """PRINCIPAL_NAME"" VARCHAR(100)": "STRINGDECODE('Customer{id=123, email=''email@example.org'', firstName=''Jane'', lastName=''Doe'', company='... (179)"; SQL statement:
UPDATE SPRING_SESSION SET SESSION_ID = ?, LAST_ACCESS_TIME = ?, MAX_INACTIVE_INTERVAL = ?, EXPIRY_TIME = ?, PRINCIPAL_NAME = ? WHERE PRIMARY_ID = ? [22001-199]

As stated in the exception, the schema of one of the newly created database tables does not allow a PRINCIPAL_NAME longer than 100 characters. The principal in Spring Security is your user object (e.g., a class named “User” or something similar that is used in the login process). The error therefore means that something goes wrong when Spring Security tries to persist the user session in the database.

Continue reading

Environment-Dependent Code Execution with Spring Boot and Docker

Different deployment environments (e.g., development, testing, staging and production) often require that different pieces of code are switched on and off. For example, you only want to send real emails in production and write to an email log in all other environments instead. Or your code to initialise the database should only run in the staging environment.

A simple but hard-to-maintain way to achieve such conditional code execution is to define a globally available variable such as isProductionSystem and write code like this:

if (isProductionSystem) {
  sendMail(...);
} else {
  logMail(...);
}

This is hard to maintain because you end up with large if statements once you have multiple environments. Also, the only thing you can change per environment is the isProductionSystem variable, which means you either get all of the “production-only” features or none of them.

The @ConditionalOnProperty Annotation

Spring offers a much better way to handle conditional code execution. You can annotate beans that should only be created when an environment flag is set with the @ConditionalOnProperty annotation:

@Bean
@ConditionalOnProperty(value = "emails.enabled")
public EmailJob emailJob() {
    return new EmailJob();
}

In the above example, the emailJob bean is only created when the property emails.enabled is set (by default, any value other than false counts as enabled).
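The flag itself lives in your configuration, e.g. as a single line in application.properties (property name taken from the example above):

```properties
# switch the email job on in this environment
emails.enabled=true
```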

Profile-dependent execution

The code execution can also be dependent on the application profile:

@Profile("prod")
@Bean
public EmailJob emailJob() {
    return new EmailJob();
}
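Which profile is active is decided per environment, for example with a single line in application.properties:

```properties
spring.profiles.active=prod
```

In Docker, the same can be achieved by passing -e SPRING_PROFILES_ACTIVE=prod to docker run.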

As already mentioned, this removes some flexibility because you can no longer activate or deactivate individual features at configuration time in the respective environment.

Overriding Configuration Flags in Docker

When using Docker, you can override the default values for the properties by using environment variables with the -e flag of the docker run command:

docker run -it -e SPRING_JPA_DATABASE_PLATFORM=org.hibernate.dialect.PostgreSQLDialect myimage:latest

Spring translates property names into environment variable form by substituting dots with underscores and capitalising everything: to override the value of my.property, you set the environment variable MY_PROPERTY.

In the above example, you can control whether to actually send emails with the following command:

docker run -it -e EMAILS_ENABLED=1 myapp:latest

Conditional Execution on Application Startup

There is another interesting use case for an overridable configuration switch: code that is executed on application startup. For example, a method that creates test instances in an empty database in the staging system but is skipped in production.

This variant is more verbose but still readable: extract the configuration value into a private field with the @Value annotation (defaulting to false) and check it in a method annotated with @EventListener, listening to the ApplicationReadyEvent:

@Service
public class PostStartService {

    @Value("${init.empty.db:false}")
    private boolean initEmptyDb;

    @EventListener(ApplicationReadyEvent.class)
    public void createTestData() {
        if (!initEmptyDb) {
            return;
        }

        // execute your code here
    }
}

In the above code snippet, you can control whether the body of createTestData is executed by setting the following environment variable in Docker:

docker run -it -e INIT_EMPTY_DB=1 myapp:latest

Conclusion

In this article, I presented several techniques for running code depending on the deployment environment. Let me know what works best for you or whether you have different ways of achieving the correct environment configuration.

A More Powerful macOS Alternative to Outlook Quick Parts – Boosting Email Productivity

Stop repeating yourself.

Emails and messages in general often consist of repeated content blocks that never change. Greetings and signatures are perfect examples: some people even try to eliminate such “noise” from messages entirely (see, for example, the “no hello” movement).

While it is unrealistic that such changes to the human interaction protocol will ever be universally accepted, there is something else a productivity-minded person can do to lose less time each day typing the same phrases over and over again.

Outlook Quick Parts

The idea is simple: instead of typing a long phrase (e.g., your signature containing your name, phone number and email address), you create a building block with a certain name. Quick Parts then expands this name into the full content:

Screenshot from https://www.youtube.com/watch?v=h_JVTjhwsy4

For example, you name a building block “byelong”, press F3 and let Outlook expand it into a signature:

Best regards,
Bernhard Knasmüller
www.knasmueller.net

Quick Parts can also contain formatting and images and are an easy way to manage reusable content in Outlook.

A More Powerful macOS Alternative

Continue reading

Measure Code Execution Time Accurately in Python

Measuring code execution times is hard. Learn how to eliminate systematic and random measurement errors and obtain more reliable results.

We often need to measure how long a specific part of code takes to execute. Unfortunately, simply measuring the system time before and after a function call is not very robust and is susceptible to systematic and random measurement errors. This is especially true when measuring very short intervals (< 100 milliseconds).

Systematic and Random Errors

So what is wrong with the following way of measuring?

import time

time_start = time.perf_counter()
my_function()
time_end = time.perf_counter()
execution_time = time_end - time_start

First, there is a systematic error: by invoking time.perf_counter(), an unknown amount of time is added to the execution time of my_function(). How much time? This depends on the OS, the particular implementation and other uncontrollable factors.

Second, there is a random error: the execution time of the call to my_function() will vary to a certain degree.

We can combat the random error by performing multiple measurements and aggregating them. However, removing the systematic error is much more challenging.
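As a sketch of the multiple-measurements idea, Python’s standard library module timeit runs the code many times per measurement (spreading the timer overhead across many calls) and can repeat the whole measurement to expose the random spread; my_function below is just a stand-in workload:

```python
import timeit

def my_function():
    # stand-in workload; replace with the code you want to measure
    return sum(i * i for i in range(1000))

# number=1000: run the function 1000 times per measurement, so the
# perf_counter() overhead is amortised over many calls
# repeat=5: take five independent measurements to see the random spread
timings = timeit.repeat(my_function, number=1000, repeat=5)

# the minimum is a common estimate of the undisturbed execution time,
# since noise from the OS and other processes only ever adds time
per_call = min(timings) / 1000
print(f"best estimate: {per_call * 1e6:.2f} microseconds per call")
```

Whether to take the minimum or the average is a judgment call: the average includes the noise, while the minimum approximates the noise-free run.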

Continue reading

Thymeleaf: Output Size of List

Question: In my Thymeleaf template, I want to output the size of a list of objects. For example, for the list

List<String> list = Arrays.asList("Go", "Python", "Scala");

the result should be 3. How is this possible using just Thymeleaf syntax?

You can use the following expression:

<span th:text="${#lists.size(myList)}">[Number of Elements]</span>

And just in case you want to check if the list is empty, there is also an easy method for that:

<span th:if="${#lists.isEmpty(myList)}">This list is empty</span>

Get Hosted Jira and Confluence for Free

Atlassian’s collaboration tools Jira and Confluence are among the most popular tools for managing software projects and group knowledge. Learn how to get access to these great tools for free.

For the longest time, Atlassian charged $10 per month per application in their “Standard” tier. While this is basically nothing for an established company, it is still discouraging for start-ups and non-profits since it adds up to a bill of $240 per year for the popular combination of Jira + Confluence.

However, Atlassian has changed their strategy. Apparently, they are no longer interested in startups getting hooked on free alternatives. There is now a “Free” tier for up to 10 users with a slightly reduced feature set, available for both Jira and Confluence:

Continue reading

GitLab: Authenticate Using Access Token

GitLab lets you create personal access tokens to authenticate against Git over HTTPS. Using these tokens is a secure alternative to storing your GitLab password on a machine that needs access to your repository. It is also the only way to automate repository access when two-factor authentication is enabled.

However, GitLab does a poor job documenting how you actually use these tokens.

Create an Access Token

Navigate to “User Settings” > “Personal Access Tokens” and enter a name and, optionally, an expiration date:

Read and write access to the repository should be sufficient for many use cases, but you can also pick additional scopes. Create and copy the token and save it at a secure location (ideally, in your password manager).

Using the Token at the CLI

This is the crucial piece of information missing from the documentation at this time: you can use the token as the password for the fictional “oauth2” user in CLI commands.

For example, to clone your repository:

git clone https://oauth2:1AbCDeF_g2HIJKLMNOPqr@gitlab.com/yourusername/project.git project

Configure the Token for an Existing Repository

The authentication method of an existing checked-out git project is defined in the .git/config file. If you want to use an access token instead of SSH or password-based HTTPS auth for such an existing project, adapt this file as follows:

...
[remote "origin"]
        url = https://oauth2:1AbCDeF_g2HIJKLMNOPqr@gitlab.com/yourusername/project.git
        fetch = +refs/heads/*:refs/remotes/origin/*
...
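If you prefer not to edit .git/config by hand, git remote set-url makes the same change; the following self-contained sketch demonstrates it in a throwaway repository (the token is a placeholder):

```shell
# create a scratch repository just for the demonstration
cd "$(mktemp -d)"
git init -q
git remote add origin https://gitlab.com/yourusername/project.git

# switch the remote to token-based authentication
git remote set-url origin https://oauth2:YOUR_TOKEN@gitlab.com/yourusername/project.git
git remote get-url origin
```

In an existing project, only the git remote set-url line is needed.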


Git: Overwriting ‘master’ with Another Branch

In many git workflows, all changes to your code should eventually end up in the master branch. You may also have a develop branch which contains code changes that are not ready for production deployment yet.

For one reason or another, you may end up in a situation where your develop branch has changed so much that you can no longer easily merge it into master. Most of those reasons suggest bad practices, but such a situation may also arise from changes introduced into your git workflow or deployment process.

One way out of this dilemma is to completely replace master with the current develop. There are two ways to achieve that.

Merge Strategy ‘Ours’

You can use the following commands to merge develop into master using the ‘ours’ merge strategy:

git checkout develop
git merge -s ours master
git checkout master
git merge develop

The resulting master should now contain the contents of your previous develop and ignore all changes in master.

This method’s advantage is that you get a clean merge commit and other developers using those two branches are less likely to experience problems when merging their feature branches.

The downside is that this merge might fail if your develop and master have diverged to a large degree.

Force Pushing

A more brutal alternative is to force push the develop branch under a different name:

git push -f origin develop:master

Using the -f flag, your previous master is completely overwritten with develop, including its history. Warning: this erases all commits from the master branch that are not also in the develop branch.

This solution may be appropriate in your case if you have a small number of other branches and/or other developers. The downside of this approach is that all developers who already have a local copy of the master branch will need to perform a git reset --hard.

Force pushing to the master branch might fail if you use GitLab’s “Protected Branches” feature. You can either make sure your user has proper permissions or disable the protection for a few seconds until your changes are saved.

Get Two Free Compute Units from Oracle Cloud

Oracle is currently offering an always-free tier of its compute cloud. In this article, I will show how to register an account, add the free tier units and connect to them via PuTTY on Windows.

What is included?

While the offer is actually free for an unlimited time, the following restrictions apply:

  • 2 database services are included (ATP serverless and ADW) limited to 1 OCPU and up to 20 GB
  • 2 compute services (1 GB RAM and 1/8 OCPU each)
  • a valid credit card and phone number are required for the registration process
  • 250 EUR of cloud credits are also included but must be used in the first 30 days

Getting started

Browse to https://www.oracle.com/cloud/free/ and register an account. After login, you will be welcomed by the following dashboard:

Navigate to “Create a VM instance”, enter a name (or leave the default) and choose your favourite operating system. However, only a few of them are eligible for the free tier:

Before hitting the “Create” button, you need to set up your SSH authentication.

Continue reading

Improve GitLab Pipeline Performance with DAGs

Directed Acyclic Graph (DAG) style dependencies between individual jobs in a continuous deployment pipeline allow for a more flexible workflow and better utilisation of available computational resources.

Imagine a simple pipeline consisting of three jobs:

  1. A syntax check
  2. A code complexity check
  3. Running all unit tests

You may be tempted to group those in two stages: A) Build (consisting of jobs 1 and 2) and B) Test (consisting of the unit tests):

Traditional Sequences

In plain old GitLab pipelines, you would define that stage A needs to execute before stage B and everyone would be happy.

Except that the syntax check may be quite fast (let’s assume 30 seconds) while the code complexity check is very slow (say 4 minutes). The unit tests then need to wait max(30 sec, 4 min) = 4 minutes before they can be executed, resulting in an overall slow pipeline:
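With DAG scheduling (GitLab’s needs keyword), the unit test job can declare that it only depends on the fast syntax check and start as soon as that job finishes, instead of waiting for the whole Build stage. A sketch of such a .gitlab-ci.yml (job names and scripts are illustrative):

```yaml
stages:
  - build
  - test

syntax_check:
  stage: build
  script: ./check_syntax.sh        # fast (~30 seconds)

complexity_check:
  stage: build
  script: ./check_complexity.sh    # slow (~4 minutes)

unit_tests:
  stage: test
  needs: ["syntax_check"]          # start right after the syntax check,
  script: ./run_tests.sh           # without waiting for complexity_check
```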

Continue reading