
DevOps Engineer - Remote Eligible

Automate and manage Kubernetes infrastructure on Google Cloud Platform using Terraform.
Mexico City
Mid-Level
MXN 850,000 – 950,000 / year
1 week ago
Peek

An online platform offering a marketplace for travelers to discover and book unique experiences and activities.

DevOps Engineer

We are looking for our next DevOps Engineer: someone who will contribute passionately to infrastructure-as-code, accelerate development and deployment processes, increase reliability, and scale our platform as the company grows.

Our team is 100% remote; however, we prefer candidates located within the time zones of the greater United States (UTC-10 to UTC-4).

This is an on-call position and will require you to be part of an on-call schedule. We will also occasionally require you to work outside of normal business hours on infrastructure upgrades and maintenance. We are committed to working with you to keep a healthy and balanced schedule.

About The Team:

We're a small DevOps team supporting 50+ engineers, building a service-oriented architecture on top of Kubernetes and GCP. We own all aspects of the SDLC but strive to automate self-service wherever possible. Being a small team, we also practice SRE, continuously improving our observability and building nearly everything with Infrastructure-as-Code. Security and compliance best practices are integral to our workflows, ensuring systems are secure by design and meet regulatory and organizational standards. Our team is remote but highly organized to meet the demands of a fast-paced environment. Our primary business language is English, and we emphasize strong communication skills.

About You:

You are an experienced cloud engineer with at least three years managing Google Cloud Platform (GCP) and/or Amazon Web Services (AWS), including services such as Compute Engine, Kubernetes Engine, Cloud SQL (PostgreSQL), Memorystore (Redis), Cloud DNS, Route 53, S3, IAM, VPCs, and Security Groups. You have a strong track record of operating large-scale, high-availability, asynchronous, distributed systems, deploying and managing service-oriented architectures, improving application performance, and solving scaling challenges.

You have hands-on experience running Kubernetes in production using Helm, and you are skilled with infrastructure-as-code technologies such as Terraform or Pulumi. You understand how to design and implement robust monitoring and reporting solutions using tools like Prometheus, Grafana, or New Relic. You have a solid understanding of networking (routers, switches, load balancing, DNS, VPN, TLS). You are experienced in working with source control and CI/CD systems such as Git/GitHub, Jenkins, Codefresh, or ArgoCD.

You can code in one or more programming languages such as Python, TypeScript, or Go. You have experience with data warehousing using BigQuery or Redshift. You are security-minded and strive to ensure security and compliance best practices throughout the SDLC to meet SOC2 and PCI requirements, especially when handling PII.

You are comfortable working with serverless platforms like GCP Cloud Run and Cloud Functions. You enjoy building playbooks and mentoring others, sharing knowledge to strengthen the team as a whole.

Requirements:

  • At least 3 years of experience as a DevOps Engineer or Platform Engineer.
  • Hands-on experience with Kubernetes, including the ability to troubleshoot cluster-related issues.
  • Proficiency with Infrastructure as Code (IaC) tools such as Terraform or Pulumi.
  • Strong scripting skills in Bash and Python, with experience writing automation scripts for CI/CD pipelines.
  • Experience working with a major cloud provider (AWS, GCP, or Azure), and a solid understanding of networking concepts such as VPCs, DNS, TLS, load balancing, and VPNs.
  • Solid understanding of the software development lifecycle (SDLC) and modern CI/CD systems such as GitHub Actions, Jenkins, Codefresh, or ArgoCD.

Nice To Haves:

  • Direct experience with GCP (Google Cloud Platform), where we deploy 95% of our infrastructure.
  • Experience with high-level programming languages such as Python, Ruby, or TypeScript.
  • Experience working with databases such as PostgreSQL and MongoDB.
  • Experience working with data warehouses such as Redshift and BigQuery.
  • Experience with caching and CDN systems such as Redis and Fastly.
  • Experience working with serverless platforms such as GCP Cloud Run, GCP Cloud Functions, or AWS Lambda.

What To Expect In The Interview Process:

  1. Meet the Recruiter: Discuss the requirements of the role and learn more about Peek's culture
  2. Meet the Hiring Manager
  3. Infrastructure Challenge, followed by meeting the team
  4. Meet a Stakeholder
  5. Meet an Executive
  6. References and Offer

Peek invests in our employees' health and well-being. We've built our benefits package around our Peeksters' needs, including full health care, dental, and vision plans, paid parental leave, a company-wide recharge at the end of the year, and competitive compensation packages with significant equity upside that lets you share in Peek's long-term success.

Peek Travel Inc. is an equal-opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, national origin, sex, sexual orientation, gender identity, veteran status, disability, or other legally protected status. If you are unable to apply because of incompatible assistive technology or a disability, please contact us at talent@peek.com. We will make every effort to respond to your request for assistance as soon as possible.
