Written 25 July 2023 · Last edited 28 March 2026 ~ 10 min read
The dungeon crawler I never shipped, and the API I did
Benjamin Clark
I had a plan. I was going to build a text-based roguelite for the terminal — procedurally generated dungeons, enemies with names, the kind of thing you could play in a lunch break. Goblins, orcs, trolls. Skeletons. That sort of thing.
Each monster class was going to have its own naming theme. Goblins would have very British, very working-class names. Goatmen — and I genuinely cannot remember why goatmen were a thing, but they were — would have something more whimsical. Then life got in the way, as it tends to, and the game never shipped.
But the monster name generator did. From the ashes of an abandoned side project came a fully functional REST API, complete with behavioural tests, a CircleCI pipeline, and a production deployment I am mildly embarrassed to describe. This is the story of how that happened, what I’d do differently, and why the test suite is the one part that aged well.
This post was rewritten in March 2026. The code it describes is a hobby project from around 2020 — six years on, I find parts of it deeply embarrassing. The architecture is overbuilt for the traffic, the auth is naive, and I’ve long since replaced the whole thing. It still earns a write-up because it was a genuine learning exercise at the time, and the glaring gaps and mistakes, set against the code I write now, show something like character growth, I suppose.
What the monsters needed
Each monster class had different naming conventions. Goblins needed a first name and a last name — “Fat Jonny Punching” was the vibe I was going for. Goatmen only needed a first name. Same for ogres. Orcs, skeletons, and trolls got the full first-and-last treatment.
The requirements were straightforward:
- GET requests return a random name for a given monster class
- POST requests add new names, protected by an API key
- Sensible error messages for invalid requests, unknown keys, or duplicate records
- The whole thing runs in a Docker container
That last requirement was partly pragmatism and partly habit from my SysAdmin background. If it runs in a container, it runs the same everywhere.
A database without the SQL
I knew from the start that I didn’t want to hand-craft SQL. Not because I can’t — I can — but because SQL injection is unpleasant and maintaining raw queries alongside Python code is more overhead than the project warranted.
So I went looking for an ORM. I found PeeWee. I used PeeWee. I did not, at the time, know that SQLAlchemy existed. I’m not particularly embarrassed about this — PeeWee is lightweight, Flask-friendly, and has a syntax that’s immediately readable. For a project of this scale, it was absolutely fine.
A model looks like this:
# src/database/models.py
db = MySQLDatabase(
    dbVars.dbName,
    host=dbVars.dbHost,
    port=3306,
    user=dbVars.dbUser,
    passwd=dbVars.dbPassword,
)

class GoblinFirstName(peewee.Model):
    firstName = peewee.CharField()

    class Meta:
        database = db
That’s it. One class, one field, one Meta binding it to the database connection. The connection itself is configured via environment variables, which means the same code runs identically in CI as it does in production.
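The dbVars object isn’t shown above; it’s a thin wrapper over environment variables. A sketch of what it might look like — the variable names (DB_NAME, DB_HOST, and so on) are my reconstruction from the build args later in this post, not necessarily the exact names in the project:

```python
# Hypothetical sketch of the dbVars config object. Everything comes from
# environment variables, so CI and production only differ in what they export.
import os
from dataclasses import dataclass, field

@dataclass(frozen=True)
class DbVars:
    dbName: str = field(default_factory=lambda: os.environ.get("DB_NAME", "monsternames"))
    dbHost: str = field(default_factory=lambda: os.environ.get("DB_HOST", "localhost"))
    dbUser: str = field(default_factory=lambda: os.environ.get("DB_USER", "monsternames"))
    dbPassword: str = field(default_factory=lambda: os.environ.get("DB_PWD", ""))

dbVars = DbVars()
```

Using default_factory means the environment is read at instantiation time, which keeps the class importable in tests before any variables are set.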
Setting up the database at deployment is equally simple:
# src/database/setup.py
models.GoblinFirstName.create_table()
# ... repeated for each monster class
models.db.commit()
Repetitive, yes. A more mature setup would use proper migrations. But for a project where I controlled every deployment and the schema rarely changed, this was sufficient.
Wiring it up with Flask
With models in place, I needed a web framework to expose them as a REST API. I went with Flask — lightweight, minimal opinions, does what it says on the tin.
The pattern was a monster_endpoint base class as an abstraction over the PeeWee models, wired to Flask routes:
# src/app.py
@application.route('/api/v1.0/goblin', methods=['GET'])
@get_route
def get_goblin():
    return GoblinEndpoint.return_name()

@application.route('/api/v1.0/goblin/firstName', methods=['POST'])
@monster_route
def post_goblin_first_name():
    return GoblinEndpoint.insert_first_name(request)

@application.route('/api/v1.0/goblin/lastName', methods=['POST'])
@monster_route
def post_goblin_last_name():
    return GoblinEndpoint.insert_last_name(request)
The @get_route decorator adds CORS headers. The @monster_route decorator handles all the error cases — unauthorised requests, missing API keys, unhandled exceptions — so the route functions stay clean. The abstraction worked well; adding a new monster class meant instantiating the base class with the right models and writing three routes.
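The decorators themselves aren’t shown here, but the shape is the standard functools.wraps error-wrapping pattern. A minimal stdlib sketch of what @monster_route does — the exception name and response bodies are illustrative, not the project’s actual ones:

```python
# Sketch of the @monster_route pattern: catch every error case in one
# place so route handlers stay one line long. Names are hypothetical.
import functools
import json

class UnauthorizedError(Exception):
    """Raised when the x-api-key header is missing or unknown."""

def monster_route(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except UnauthorizedError:
            # Unknown or missing API key -> 401 with a JSON body
            return json.dumps({"error": "invalid or missing API key"}), 401
        except Exception as exc:
            # Anything unhandled -> 500, never a raw traceback to the client
            return json.dumps({"error": str(exc)}), 500
    return wrapper

@monster_route
def post_name():
    raise UnauthorizedError

body, status = post_name()  # status is 401
```

The real decorator returns Flask responses rather than bare tuples, but the control flow — one try/except wrapping every handler — is the whole trick.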
POST endpoints authenticate by checking an x-api-key header against a plain-text lookup in the database — no hashing, no rotation, nothing you’d consider acceptable in a real system. It works for a personal project, but if you’re adapting this pattern, at minimum hash the stored keys, or move to a signed token scheme such as JWT.
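If I were patching the auth rather than replacing it, the minimum viable fix is storing a digest of each key instead of the key itself, and comparing in constant time. A stdlib-only sketch — the function names are mine, not the project’s:

```python
# Hypothetical hardening of the API-key check: the database stores only
# a SHA-256 digest, and comparison is constant-time.
import hashlib
import hmac
import secrets

def hash_key(api_key: str) -> str:
    """Digest to store in the database instead of the raw key."""
    return hashlib.sha256(api_key.encode()).hexdigest()

def verify_key(presented: str, stored_digest: str) -> bool:
    """Constant-time comparison, so timing can't leak digest prefixes."""
    return hmac.compare_digest(hash_key(presented), stored_digest)

# Issue a key once, hand it to the client, keep only the digest.
new_key = secrets.token_urlsafe(32)
stored_digest = hash_key(new_key)
```

A leaked database dump then gives an attacker digests rather than working credentials, which is a big improvement for three extra lines.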
Teaching myself BDD on a project that couldn’t fail
At the time, I didn’t know pytest well. I did know Cucumber — I’d used it for infrastructure testing — and I knew Python had its own implementation in Behave. This became my sandbox for learning it properly, on a project where getting it wrong had no real consequences.
The premise is that tests read like plain English. A feature file defines scenarios in Given/When/Then format; a steps file implements the actual assertions. For a REST API with consistent behaviour across multiple endpoints, it maps naturally:
# features/api.feature
Feature: API functionality

  Scenario Outline: /api/v1.0/goatmen
    Given a <field> of <field_value>
    Then I should be able to POST to <post_endpoint>
    And GET to <get_endpoint> will contain <return_fields>

    Examples:
      | field     | field_value | post_endpoint               | get_endpoint      | return_fields      |
      | firstName | Fluffy      | /api/v1.0/goatmen/firstName | /api/v1.0/goatmen | fullName,firstName |
      | firstName | Squiggles   | /api/v1.0/goatmen/firstName | /api/v1.0/goatmen | fullName,firstName |
      | firstName | Flopsy      | /api/v1.0/goatmen/firstName | /api/v1.0/goatmen | fullName,firstName |
      | firstName | Bugsy       | /api/v1.0/goatmen/firstName | /api/v1.0/goatmen | fullName,firstName |
      | firstName | Tooty       | /api/v1.0/goatmen/firstName | /api/v1.0/goatmen | fullName,firstName |
The goatmen are named Fluffy, Squiggles, Flopsy, Bugsy, and Tooty. This was a game for the terminal. I had creative latitude.
The steps themselves are minimal:
# features/steps/steps.py
@given("a {field} of {field_value}")
def step_imp(context, field, field_value):
    context.data = {field: field_value}

@then("I should be able to POST to {post_endpoint}")
def step_imp(context, post_endpoint):
    req = post(context.base_url + post_endpoint, data=context.data,
               headers={"x-api-key": context.api_key})
    assert_equal(req.status_code, 200)

@then("GET to {get_endpoint} will contain {return_fields}")
def step_imp(context, get_endpoint, return_fields):
    req = get(context.base_url + get_endpoint)
    response = req.json()
    for field in return_fields.split(","):
        assert_equal(field in response, True)
Running python3 -m behave executes 50 tests across every endpoint in under a second. Adding a new monster class means adding a new table in the feature file. Adding test data means adding a row.
Of everything in this project, the Behave suite has aged the finest. It still clearly communicates what the API does and how it behaves, and extending it is trivial. I use pytest by default these days, but the scenario table format suits REST endpoint testing well, and I don’t regret the choice.
The CI/CD: just enough
The CircleCI pipeline is sparse. On merge to master, it authenticates to AWS ECR, builds the Docker image, tags it with the commit SHA, and pushes it:
# .circleci/config.yml
- run:
    name: Login to Prod ECR
    command: |
      set -eo pipefail
      aws configure set aws_access_key_id $PROD_AWS_ACCESS_KEY_ID --profile default
      aws configure set aws_secret_access_key $PROD_AWS_SECRET_ACCESS_KEY --profile default
      aws ecr get-login-password --region $AWS_REGION --profile default | docker login --username AWS --password-stdin $PROD_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com
- run:
    name: Build image
    command: |
      docker build -t monsternames . --build-arg db_host="$DB_HOST" --build-arg db_name="$DB_NAME" --build-arg db_user="$DB_USER" --build-arg db_pwd="$DB_PWD" --build-arg web_host="$WEB_HOST"
- run:
    name: Tag with circleCI tag and push
    command: |
      docker tag monsternames:latest $PROD_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/monsternames:${CIRCLE_SHA1:0:7}
      docker push $PROD_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/monsternames:${CIRCLE_SHA1:0:7}
There’s no test step before the push. This is not a pattern to follow — the correct approach is to run your tests between build and push, so you only push images you’ve verified work. I can’t remember why I omitted this. Don’t repeat the mistake.
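The missing step would have been small. Something like the following, inserted between build and push — the step name and commands are illustrative, and it assumes the API and a test database can be brought up inside the CI executor (for instance via a compose file, which this project didn’t have):

```yaml
- run:
    name: Run Behave suite against the built image
    command: |
      # Illustrative only: start the app and a throwaway database, then
      # run the behavioural suite before anything is pushed to ECR.
      docker-compose up -d
      python3 -m behave
```

Fail here and the pipeline stops; nothing unverified ever reaches the registry.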
The general pattern — authenticate, build, tag with a commit reference, push — is sound. I’ve since implemented variations of it across multiple CI platforms. The more mature versions pull AWS credentials from Parameter Store rather than storing them as CI variables, but the structure is the same.
Running it in production (please don’t)
The infrastructure is an EC2 instance on a public subnet with an elastic IP, a security group restricting world access to ports 80 and 443, an RDS on a private subnet accessible only from the EC2, and Nginx as a reverse proxy:
server {
    listen 443 ssl;
    server_name monsternames-api.com;

    ssl_certificate /path/to/cert.pem;
    ssl_certificate_key /path/to/key.pem;

    location / {
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

server {
    listen 80;
    server_name monsternames-api.com;

    location / {
        return 301 https://$host$request_uri;
    }
}
SSL certificates are managed by certbot, renewed automatically via a cron job:
0 12 * * * /usr/bin/certbot renew --quiet
This setup ran from July 2020 until I gutted and rewrote the whole thing in 2026, and certbot renewed the certificates silently, without incident, the entire time.
EC2 uptime costs money whether or not anyone’s using the API. Managing Docker on a VPS, patching the OS, and keeping an RDS alive added up to more operational overhead than a sporadic side project warranted. Lambda and DynamoDB were the obvious fit — I just didn’t reach for them until a few years later.
Fat Jonny Punching
Despite everything, the API does work. Here’s what you get from a goblin:
GET https://monsternames-api.com/api/v1.0/goblin
{
  "firstName": "Fat Jonny",
  "lastName": "Punching",
  "fullName": "Fat Jonny Punching"
}
If you want to see this in practice, monster.mnuh.org uses the API — alongside an image generation API and the Simpsons quote API — to produce monster cards. I had no involvement in building it; someone found the API, thought it was amusing, and built something with it.
And then I learned Terraform
At some point I sat down, looked at my AWS bill, looked at the architecture, and felt something close to embarrassment.
I was paying for a t3.micro to sit idle for most of its life. I was managing Docker on a VPS. I was maintaining an RDS instance for a database that barely changed. Once I’d built real production infrastructure — Lambda, DynamoDB, API Gateway, Terraform — looking back at this felt like finding an old school essay. Technically functional, obviously written before I knew better.
So I rewrote it. The new version uses Lambda and DynamoDB, provisioned entirely with Terraform, with no servers to manage and a bill that rounds to zero. That’s a separate post.
The Behave test suite aged well. The Flask and PeeWee implementation did its job without complaint. The infrastructure taught me what I’d do differently with a few more years of cloud experience. Side projects don’t need to be architecturally sound — they need to be finished, and they need to teach you something. This one did both.
The source code is archived on GitHub if you want to poke around.