Assertfail

Small and simple business applications using SQL

03 Apr 2022

Say that you want to write a minimal API as seen in Can .net compete with NodeJS. How would you structure such an application? You would probably end up with something like zero or one layers in your application.

Some of my previous coworkers embrace these patterns in order to focus on business value. In this post I’ll try to get into how you could make such a system without going into detail (pointing instead to examples).

Embrace SQL

One way to write simple APIs is to embrace SQL when you feel that it’s appropriate. See for instance It’s Time To Get Over That Stored Procedure Aversion You Have.

If you need a low-ceremony application without much in the way of server-side logic, there is no need to over-engineer it.

There is a sample in NodeJs that is also ported to TypeScript. Just make sure to keep things parameterized.
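To illustrate what "keep things parameterized" means in practice, here is a sketch assuming the node-postgres ("pg") client, whose query() accepts a { text, values } config object. The function name and table are made up for the example; the point is that user input only ever appears in values, never in the SQL text itself.

```typescript
// A parameterized query as a { text, values } config (the shape pg accepts).
// $1, $2 are positional placeholders; the driver binds values separately,
// so the input string is never spliced into the SQL.
interface QueryConfig {
  text: string;
  values: unknown[];
}

function findCustomerByName(firstName: string, surname: string): QueryConfig {
  return {
    text: "select customer_id, city from customers where first_name = $1 and surname = $2",
    values: [firstName, surname],
  };
}

// With a real pool this would be used roughly as:
//   const result = await pool.query(findCustomerByName("Paul", "Miller"));
const q = findCustomerByName("Paul", "Miller");
```

Even a hostile input like `Robert'); drop table customers;--` just becomes an inert bound value this way.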

SQL and Document databases

When you read and store a big object graph in one go, your application logic can be significantly simplified by persisting that structure as-is instead of normalizing it into a SQL view of the world. You could also use PostgreSQL as a DocumentDB, so there need not be a hard divide between the SQL and the DocumentDB views of the world.

Note for instance the tutorial from MongoDB where the JSON is as follows:

{
  "first_name": "Paul",
  "surname":"Miller",
  "cell": "447557505611",
  "city":"Lenden",
  "location": [45.123,47.2321],
  "profession": ["banking","finance","trader"],
  "cars":[
    {
      "model":"Bentley",
      "year": 1973,
      "value": 100000
    },
    {
      "model":"Rolls Royce",
      "year": 1965,
      "value": 338080
    }]
}

A decomposition into SQL could be:

create table customers(
  customer_id int identity(1,1),
  first_name nvarchar(255),
  surname nvarchar(255),
  cell nvarchar(100),
  city nvarchar(50),
  location geography,
  primary key (customer_id));

create table professions(
  profession_id int identity(1,1),
  description nvarchar(255),
  primary key (profession_id)
);
create table customer_professions(
  profession_id int not null foreign key references professions (profession_id),
  customer_id int not null foreign key references customers (customer_id),
  primary key (customer_id,profession_id)
);

insert into customers (first_name,surname,cell,city,
                       location)
               values ('Paul','Miller','447557505611','Lenden',
                       geography::Point(45.123, 47.2321, 4326));
insert into professions (description) values ('banking'),('finance'),('trader');
insert into customer_professions (profession_id, customer_id)
select p.profession_id, c.customer_id
from professions p
cross join customers c
where p.description in ('banking','finance','trader')
  and c.first_name = 'Paul' and c.surname = 'Miller'; -- assuming only one such customer

The structured nature of SQL also implies that you would need a bit more code in order to deal with the data.
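As a sketch of that extra code, here is what flattening the document above into rows for the normalized tables could look like. The field names follow the MongoDB example; the function and shapes are otherwise made up for illustration.

```typescript
// Illustrative flattening: one document becomes one customers row
// plus one row per profession and per car.
interface Car { model: string; year: number; value: number }
interface CustomerDoc {
  first_name: string;
  surname: string;
  cell: string;
  city: string;
  location: [number, number];
  profession: string[];
  cars: Car[];
}

function decompose(doc: CustomerDoc) {
  return {
    customer: {
      first_name: doc.first_name,
      surname: doc.surname,
      cell: doc.cell,
      city: doc.city,
      location: doc.location,
    },
    professions: doc.profession.map((description) => ({ description })),
    cars: doc.cars,
  };
}

const rows = decompose({
  first_name: "Paul", surname: "Miller", cell: "447557505611", city: "Lenden",
  location: [45.123, 47.2321],
  profession: ["banking", "finance", "trader"],
  cars: [{ model: "Bentley", year: 1973, value: 100000 }],
});
```

And that is before you write the inserts for each of those row sets, and the corresponding joins to read the graph back.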

Both PostgreSQL and Microsoft SQL Server support the use of JSON. You can write views on top of the JSON information in order to select out of it. For migrating the schema you could take a note from RavenDB.
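As a sketch of the jsonb route (PostgreSQL dialect; table and view names are made up for the example), you store the whole document in one column and expose a relational view over it, while the application side only needs a thin typed wrapper around the parsed JSON:

```typescript
// Postgres-dialect DDL: ->> extracts a JSON field as text.
const ddl = `
  create table customer_docs(
    customer_id serial primary key,
    doc jsonb not null);

  create view customer_names as
    select customer_id,
           doc->>'first_name' as first_name,
           doc->>'surname'    as surname
    from customer_docs;
`;

// On the application side the document comes back as plain JSON,
// so a typed parse is usually all the mapping you need.
interface CustomerDoc { first_name: string; surname: string; city: string }

function parseDoc(raw: string): CustomerDoc {
  return JSON.parse(raw) as CustomerDoc;
}

const doc = parseDoc('{"first_name":"Paul","surname":"Miller","city":"Lenden"}');
```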

Testing it

When you write tests for a system that does not shy away from using the database as its core, a few areas deserve extra attention:

Schema migrations

Being more structured when it comes to migrations will (from what I’ve seen) pay off, since it lets you offload work to computers: automated deployment, testing, and keeping databases in sync.
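The core bookkeeping such frameworks automate can be sketched in a few lines: versioned raw-SQL scripts, a record of which versions have already run, and a function that picks the pending ones in order. All names here are illustrative, not any particular framework’s API.

```typescript
// Minimal sketch of migration bookkeeping: each script has a version,
// and the database records which versions were applied.
interface Migration { version: number; sql: string }

function pending(all: Migration[], applied: number[]): Migration[] {
  const done = new Set(applied);
  return all
    .filter((m) => !done.has(m.version))
    .sort((a, b) => a.version - b.version);
}

const migrations: Migration[] = [
  { version: 1, sql: "create table customers(...)" },
  { version: 2, sql: "create table professions(...)" },
  { version: 3, sql: "alter table customers add column cell nvarchar(100)" },
];

// A deployment that has already run version 1 only needs 2 and 3, in order.
const toRun = pending(migrations, [1]);
```

Running each pending script and recording its version in one transaction is the part where a real framework earns its keep.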

Some of the migration frameworks I’ve used (note that all of them allow for raw SQL migrations in some way or another):

In the .net world:

In the Ruby world:

Abstractions and schema

Keeping fewer layers means there are fewer abstractions that people need to learn, which can make development more accessible to newcomers. Having 200+ tables does not make for a pleasant introduction to a system; I’ve spent a lot of time at work making sense of such schemas. In many SQL implementations you can have multiple schemas in the same database in order to make things easier to understand.

The dreaded Monolith

In a sense, many of the systems that you come to dread started out as small systems. Adding small parts to a system without looking for ways to decompose it into smaller systems is how you end up with such a mess. There is no need to over-engineer your system before the pain of maintaining it materializes (though note the boiling frog metaphor when it comes to the gradual increase of technical debt).

Conclusion

There are some security implications of embracing SQL, but those can be managed. As long as you keep the system smallish, you have a nice enough time with developer-written tests. There is, as always, a challenge in knowing when something has grown too big.

Comments

Do you want to send a comment or give me a hint about any issues with a blog post? Open up an issue on GitHub.

Do you want to fix an error or add a comment to be published on the blog? You can fork this post and send a pull request on GitHub.