Employee Spotlight: John Lochbaum, Database Reliability Engineer
We regularly talk with our team members about their experience working at Canoe and their future goals and aspirations. This week, we spoke with John Lochbaum, Database Reliability Engineer.
Tell us about your background. How did you find Canoe?
In 2015, I got my start in data at a software company that provided point-of-sale, event coordination, capacity planning, and redemption software, and that hosted and designed e-commerce sites for family entertainment centers around the world. Our most notable clients were the SkyZone indoor trampoline parks. From there, I took a job as a database administrator for a local government, managing database-related projects, migrations, upgrades, and access for the county's departments. After that, I spent about three years at one of the largest wine closure companies in the world, helping manage their data and data-related projects.
I was looking for a change of venue when I spoke with Dustin on Canoe’s Talent Team and we started the interview process. The more I met the team and heard about the services the company provides, the more convinced I became that this was a great opportunity and a great environment to work in. In the five months I’ve been here, I can honestly say it’s been even better than I imagined, and I’m humbled daily by how talented my coworkers are.
As a database reliability engineer, what are some of the key responsibilities of your role?
My role is to help lead the planning, management, and scaling of our company’s databases to ensure that our business requirements are met and that our data can be accessed in a fast, reliable, and safe manner. This means I am concerned with topics like backup solutions, database access procedures, capacity planning, performance tuning and optimization, database design and implementation, and data-related software changes. I see my position as a supportive one, where I am trying to empower users (my colleagues and our clientele) with the data they need to accomplish their goals.
What projects are you working on in 2023?
There are a lot of projects I’m involved in, and a few that are really exciting to me. We’re working on various automation projects for our databases, which will further empower our developers and QA team in functional testing, integration testing, and UAT, reducing time to market for our client-facing products. We’re also optimizing how we store our data and tuning performance around it. That could unlock a lot for the company, such as making use of read replicas, enabling better data transfer tools, and opening a path forward to Amazon Aurora.
Our current database build process requires a lot of manual support and is hard to scale to meet the needs of our growing engineering team. To meet this demand, we are leveraging a set of industry-leading tools (Python, Apache Airflow, and AWS APIs) to overhaul the process. Our aim is to drive developer efficiency and reduce time to market for our client-facing products.
This database build overhaul will let us automate the process of taking database snapshots, applying a new Key Management Service (KMS) encryption key, restoring to a lower environment, creating a new database instance, and standing up a database that can be used for functional testing, integration testing, and UAT environments.
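As a rough sketch of what that snapshot-and-restore flow can look like with boto3 (the AWS SDK for Python), the function below strings together the same steps: snapshot, re-encrypt with a new KMS key, restore. The identifiers, naming scheme, and helper names here are illustrative assumptions, not our actual implementation:

```python
import datetime


def refresh_names(source_instance: str, env: str) -> dict:
    """Derive target snapshot/instance identifiers for a refresh run.
    Pure helper; this naming scheme is an illustrative assumption."""
    stamp = datetime.date.today().strftime("%Y%m%d")
    return {
        "snapshot": f"{source_instance}-refresh-{stamp}",
        "encrypted_snapshot": f"{source_instance}-refresh-{stamp}-kms",
        "instance": f"{source_instance}-{env}-{stamp}",
    }


def refresh_lower_env(source_instance: str, env: str, kms_key_id: str) -> str:
    """Snapshot a source database, re-encrypt the snapshot with a new
    KMS key, and restore it as a fresh lower-environment instance."""
    import boto3  # imported lazily so the naming helper above stays dependency-free

    rds = boto3.client("rds")
    names = refresh_names(source_instance, env)

    # 1. Take a snapshot of the source database.
    rds.create_db_snapshot(
        DBSnapshotIdentifier=names["snapshot"],
        DBInstanceIdentifier=source_instance,
    )
    rds.get_waiter("db_snapshot_available").wait(
        DBSnapshotIdentifier=names["snapshot"]
    )

    # 2. Copy the snapshot, applying the new KMS encryption key.
    rds.copy_db_snapshot(
        SourceDBSnapshotIdentifier=names["snapshot"],
        TargetDBSnapshotIdentifier=names["encrypted_snapshot"],
        KmsKeyId=kms_key_id,
    )
    rds.get_waiter("db_snapshot_available").wait(
        DBSnapshotIdentifier=names["encrypted_snapshot"]
    )

    # 3. Restore the re-encrypted snapshot as a new instance for the
    #    lower (test/UAT) environment.
    rds.restore_db_instance_from_db_snapshot(
        DBInstanceIdentifier=names["instance"],
        DBSnapshotIdentifier=names["encrypted_snapshot"],
    )
    rds.get_waiter("db_instance_available").wait(
        DBInstanceIdentifier=names["instance"]
    )
    return names["instance"]
```

In an orchestrated pipeline, each numbered step would typically become its own task so that failures can be retried independently.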
This new process also lets us run a series of sanitization scripts and data quality checks, followed by several PHP artisan scripts that seed the test data and user account information our QA team needs. Once finished, we’ll be able to schedule this refresh periodically or kick it off on demand when requested by our engineering and product teams.
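The post-restore stage described above is essentially an ordered command pipeline: sanitize, check, then seed. A minimal sketch of that orchestration is below; the script file names and seeder class names are hypothetical placeholders, and `php artisan db:seed --class=...` is standard Laravel seeder syntax:

```python
import subprocess


def build_refresh_plan(db_host: str) -> list:
    """Return the ordered post-restore steps: sanitization SQL, data
    quality checks, then PHP artisan seeders. Script and seeder names
    are illustrative assumptions."""
    return [
        ["psql", "-h", db_host, "-f", "sanitize_pii.sql"],
        ["psql", "-h", db_host, "-f", "data_quality_checks.sql"],
        ["php", "artisan", "db:seed", "--class=TestDataSeeder"],
        ["php", "artisan", "db:seed", "--class=QaUserAccountSeeder"],
    ]


def run_refresh(db_host: str) -> None:
    """Execute each step in order, failing fast if any step errors
    so a half-sanitized database is never handed to QA."""
    for cmd in build_refresh_plan(db_host):
        subprocess.run(cmd, check=True)
```

Keeping the plan separate from the runner makes the step ordering easy to test, and an orchestrator like Airflow can then schedule `run_refresh` periodically or trigger it on demand.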
I am also working on a number of other database optimization and data model design projects that allow me to make a direct impact on the quality of service we provide to our customers.
How does your work intersect with other departments at Canoe?
My work mostly intersects with the engineering department, although I also interact with the product department on data-related projects and changes. More broadly, when database performance improves, that has a positive impact on everyone who accesses our data.
What trends are you watching from an engineering perspective in the fintech space? Which trend do you see most applicable to Canoe?
I think a great many companies have been leveraging on-demand cloud computing platforms like AWS because they are force multipliers that allow smaller companies to offer competitive goods and services at scale. At Canoe, I see us constantly exploring how to improve our customer experience and product offerings by maximizing cloud-based technologies.