One of my first projects after coming to Proton was to create a reusable card component. A relatively simple task, but with one catch: the card needed to dynamically hide certain UI elements based on the type of data provided. Not a problem, I thought: you can conditionally render those elements as needed. Unfortunately, this approach did not stand the test of time. As we added more card variations, the component grew bigger and bigger, and harder and harder to maintain and extend.
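To make the problem concrete, here is a minimal sketch of that "conditionally render as needed" approach. This is not Proton's actual component; the `CardData` type and `renderCard` function are hypothetical, and the card is rendered as a plain HTML string for simplicity. The point is how each new variation adds another branch:

```typescript
// Hypothetical card data shape; the real component's types differ.
type CardData = {
  title: string;
  imageUrl?: string;
  badge?: string;
  price?: number;
};

// Every optional field becomes a conditional. Each new card
// variation means another branch here, which is how a component
// like this grows harder to maintain over time.
function renderCard(data: CardData): string {
  const parts: string[] = [`<h2>${data.title}</h2>`];
  if (data.imageUrl) {
    parts.push(`<img src="${data.imageUrl}">`);
  }
  if (data.badge) {
    parts.push(`<span class="badge">${data.badge}</span>`);
  }
  if (data.price !== undefined) {
    parts.push(`<p>$${data.price.toFixed(2)}</p>`);
  }
  return `<div class="card">${parts.join("")}</div>`;
}
```

With two or three optional elements this reads fine; with a dozen card variations, the conditionals multiply and every new data type touches the same function.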
We recently ran into a minor problem that proved surprisingly frustrating. Everything we run lives in containers built with Docker, including our frontend. For security, we use Aqua’s Trivy tool to scan outgoing containers before they’re pushed to our repository, where they can then be deployed. Typically, the vulnerabilities it flags are easy enough to fix: we read the scan report, bump the affected dependency to a patched version, update our package configuration, and the build passes the security scan. This time, though, I ran into a problem with our frontend that wasn’t so easy to resolve.
When I joined Proton, I didn’t anticipate spending a lot of time focusing on hiring other engineers. Six months in, I’ve managed a few different job postings, and I feel like I’ve learned a lot from being on both sides of the interview. Given the number of questions we get about our interviews, I thought I’d share. One early lesson was a revived sense of impostor syndrome, as I came to see how high a bar we hold interviewees to before they proceed. On closer inspection, though, I’ve concluded that many talented candidates have the skills to succeed but fall short at communicating them.
At Proton, data is our bread and butter. At its core, Proton exists to help our distribution clients make sense of their data. As we grow, both with new clients and with exciting new features for current ones, we gain access to more data from more sources. That makes it imperative that we have an established process for receiving, processing, and storing data so it can power our systems and AI models.
Before I joined Proton, I was partway into a PhD in Biomedical Engineering. After many nights in the lab, I had realized that path wasn’t for me. So when Benj, our CEO, called one evening to offer me the data scientist position at Proton, I didn’t have to think twice. I took one week, which happened to coincide with spring break at my university, to figure out the logistics of leaving a PhD program, and set my start date for the coming Monday. Little did I know that day would fall in the middle of a global pandemic, on the company’s very first day of work from home.
If you’ve worked with Kubernetes, odds are you’ve seen YAML manifest files that run thousands of lines long. At Proton, we’re heavy Kubernetes users, and at one point we relied on static resource definitions for all of our services. Because we rarely change these files, and never as part of a regular deployment cycle, it was clear those long YAML manifests would easily become outdated.
While I know this is our engineering blog, we get many questions about our hiring process, especially when it comes to technical roles. I thought it would be good to write everything down in one place and share a bit of philosophy in the process.