After writing about building Pixels, some people have asked me about my technology stack for building Klart.co - a bookmarking tool for designers - and Pixels - a collection of kick-ass designs. I love answering questions about this, but squishing the answers into 140 characters on Twitter is hard. So I decided to write a blog post about it. Here goes 🤗.
This blog post is a technical overview and will include very little code. It will also include some abbreviations which you may or may not have to look up. I'm sorry about that 🙈. If you're looking for implementation details I'd love to answer your questions on Twitter or email.
To be able to make choices you have to know what your goals are. I wanted to maximize four variables: simplicity, performance, cost and privacy. We can model it as a function
f(simplicity, performance, cost, privacy) which we want to maximize. I want to mention that I like to learn and build stuff myself, so this approach is not for everyone. It's not optimized for time.
The frontend for Klart.co started out with Pug templates rendered on a Node/Express server, with all styles in a single CSS file. The server rendered the
/snaps page with navigation and all, but I would fetch all data as JSON from the
/api/users/current/snaps endpoint. I could have rendered the first snaps on the server, but I figured it was easier to have a single source for the data.
Today the actual app (after you log in) is built with React. I started out with just the
/snaps page in React, since it had the most state changes and interactions. Later, when other views got more complex, I decided to write them in React too, and today the app is a single entry-point SPA. I'm not saying this is a silver bullet by any means, but for me it was a lot easier to handle the different states with React.
Frontend build process
I'd love to be able to use something like
create-react-app for the frontend. However, Klart is what's commonly called a multi-page app (it has multiple entry-points). So instead of using the eject feature of
create-react-app, I decided it was better to configure React and Webpack myself.
The configuration is nothing fancy: just one entry-point for each page/app. I also use
ExtractTextPlugin to extract CSS into its own files and
AssetsPlugin to append hashes to all filenames for cache busting (production only). A helper function takes a filename without a hash as an argument and reads the asset manifest created by AssetsPlugin to find the corresponding file with the hash included.
The backend consists of a couple of folders:
config/ - Application-wide configuration such as whitelisted domains.
controllers/ - One controller per route group.
middleware/ - Custom middleware for Express, such as authentication and authorization.
helpers/ - Helper functions, for example getting a static file with its hash included.
scripts/ - Migrations and similar one-off scripts.
Worth mentioning is that I use Express's
Router class for all controllers. I also group controllers to add common middleware. For example, I have an
api.js controller which mounts all API controllers and has error-handling middleware that returns JSON responses with appropriate status codes.
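The pattern can be sketched like this. The controller names are hypothetical, and the error handler is just ordinary Express middleware with arity four, written so it also runs against plain mock objects:

```javascript
// JSON error handler: Express treats any middleware that takes four
// arguments as an error handler.
function jsonErrorHandler(err, req, res, next) {
  const status = err.status || 500;
  res.status(status).json({
    error: {
      status,
      message: status === 500 ? 'Internal server error' : err.message,
    },
  });
}

// Grouping sketch (assumes Express and hypothetical controllers):
// const api = express.Router();
// api.use(requireAuthentication);      // shared middleware for all API routes
// api.use('/users', usersController);
// api.use('/snaps', snapsController);
// api.use(jsonErrorHandler);           // mounted last so it catches errors from above
// app.use('/api', api);
```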
I use Mocha and Chai to test endpoints and database models. They do the job and their names make you happy ☕️. I don't use any CI/CD yet, but will look into GitLab's own solution soon.
I use MongoDB as the database together with the Mongoose ODM. You could probably use plain Mongo, but Mongoose helps a lot with validation, schema structure and population (which is pretty awesome). Population basically means that if you have the models
Snap(_id, image_url, _user) and
User(_id, name), you can tell Mongoose that you want to populate
_user and it will make an additional query to embed your user object in the Snap object. Pretty neat.
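With Mongoose itself that's a single chained call; conceptually it does something like the plain-JS sketch below (a simplification of the real behavior, using the field names from the models above):

```javascript
// What populate('_user') does conceptually: fetch the referenced users
// in an extra query, then swap each _user id for the full document.
function populateUsers(snaps, usersById) {
  return snaps.map(snap => ({
    ...snap,
    _user: usersById[snap._user] || snap._user, // keep the raw id if no match
  }));
}

// The real thing, given the Snap/User models from the post:
// Snap.find({ _user: userId })
//   .populate('_user')
//   .then(snaps => { /* snaps[0]._user is now a full User document */ });
```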
So far MongoDB and Mongoose have been excellent. I'm working on a plan for teams, as well as some collaboration features, which will add more logic. We'll see how happy I am after that 😉.
The data on Klart is kept in sync across all your devices in real time. I use Socket.IO and Redis to keep track of sessions and sockets for each user. Whenever an update takes place, I push it out on the sockets that need it.
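A rough sketch of the bookkeeping involved - in production a Redis store holds the user-to-socket mapping; a plain Map stands in here, and all the names are my own:

```javascript
// userId -> set of socket ids the user currently has open.
const socketsByUser = new Map();

function registerSocket(userId, socketId) {
  if (!socketsByUser.has(userId)) socketsByUser.set(userId, new Set());
  socketsByUser.get(userId).add(socketId);
}

function unregisterSocket(userId, socketId) {
  const set = socketsByUser.get(userId);
  if (!set) return;
  set.delete(socketId);
  if (set.size === 0) socketsByUser.delete(userId);
}

// On a change, emit the update to every open socket for that user.
function pushUpdate(io, userId, event, payload) {
  for (const socketId of socketsByUser.get(userId) || []) {
    io.to(socketId).emit(event, payload);
  }
}
```

Because the emit fans out per socket, a bookmark saved on your laptop shows up on your phone without a refresh.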
I have a lot of images, so one of my biggest concerns is how to serve them. I looked at using Amazon S3 with short-lived tokens for requests going directly to Amazon. That was a no-go for me, since I want explicit access control on every request rather than relying on obscurity such as a time window. I could proxy the requests through my own server, though. My third option was to store the images myself and serve them straight from disk.
At this point Digital Ocean was beta testing block storage, so I figured that storing the images myself would be a good option in terms of my goals, and I could always migrate to block storage later on.
I use an awesome library called Passport for authentication. I store the sessions in a Redis database running on the same server as the application. I also have some custom middleware to handle authorization for users on different plans.
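That plan-based authorization can be sketched as ordinary Express middleware. The plan names, route and shape of req.user below are assumptions for illustration:

```javascript
// Returns middleware that only lets users on the given plan through.
// Assumes Passport has already attached the logged-in user to req.user.
function requirePlan(plan) {
  return function (req, res, next) {
    if (req.user && req.user.plan === plan) return next();
    res.status(403).json({ error: { status: 403, message: 'Upgrade required' } });
  };
}

// Usage sketch (hypothetical route):
// app.get('/api/teams', requirePlan('pro'), teamsController);
```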
I use Stripe as payment processor and for subscription management. Besides a bunch of attempted payments being declined at launch, Stripe has worked above expectations, and their API is extremely well documented. When a user's invoice is paid, Stripe sends me a webhook request which I verify and then update the corresponding user's plan in my database. If no webhook request comes, the user hasn't paid and will eventually not have an active plan. Easy peasy.
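A sketch of that flow. Signature verification itself happens first via the Stripe SDK's stripe.webhooks.constructEvent; the handler below is my own illustration of what could run after verification, with field names following Stripe's invoice object:

```javascript
// Given an already-verified Stripe event, decide what to store on the user.
// Treat the event/field handling here as a sketch, not Klart's actual code.
function planUpdateForEvent(event) {
  if (event.type !== 'invoice.payment_succeeded') return null; // ignore other events
  const invoice = event.data.object;
  return {
    stripeCustomerId: invoice.customer,
    // Keep the plan active until the end of the paid billing period.
    planActiveUntil: new Date(invoice.period_end * 1000),
  };
}
```

The route handler would then write that update to the user's record; if no such event ever arrives, nothing extends planActiveUntil and the plan lapses on its own.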
Everything is deployed on a Digital Ocean droplet. The UI is great and I've been very happy with support and uptime so far. It feels like a breath of fresh air compared to Amazon's products (which are also awesome, btw). I use iptables to set up rules for what kind of network traffic is allowed.
I use PM2 to manage my Node processes and load configurations using dotenv and environment variables. I also have Nginx configured as a reverse proxy in front of my Node instances. Finally, I use Cloudflare in front of everything to speed things up a bit.
To handle deployments I've configured a bare Git repository on the server and added it as a remote named
live on my development machine. To push anything to production I simply run
git push live. Once the push is received, the server runs a custom
post-receive hook that builds the app's frontend and compiles the backend code using Babel. When everything is done, it restarts the app using PM2. I use the same setup with
git push staging to push to a staging environment.
I use Digital Ocean's snapshots and backup service to back up the whole droplet. This might not scale later on, but for now it works great and I can be sure to have the full environment backed up and ready to be restored in case of emergency.
Talk about code, startups or something else? Say Hi 👋 on Twitter @drikerf.