The Trade Desk is a global technology company with a mission to create a better, more open internet for everyone through principled, intelligent advertising. Handling over 1 trillion queries per day, our platform operates at an unprecedented scale. We have also built something even stronger and more valuable: an award-winning culture based on trust, ownership, empathy, and collaboration. We value the unique experiences and perspectives that each person brings to The Trade Desk, and we are committed to fostering inclusive spaces where everyone can bring their authentic selves to work every day.
Do you have a passion for solving hard problems at scale? Are you eager to join a dynamic, globally connected team where your contributions will make a meaningful difference in building a better media ecosystem? Come and see why Fortune magazine consistently ranks The Trade Desk among the best small- to medium-sized workplaces globally.
What we do
This specialized role sits within the Technology Operations group of The Trade Desk's Engineering organization. The group is focused on delivering world-class solutions for enterprise needs within The Trade Desk.
We are seeking a skilled and motivated Software Engineer II - Data Engineer to join our growing data team. In this mid-level role, you will be instrumental in designing, building, and maintaining the data pipelines and architecture that enable our organization to turn raw data into actionable insights. You will work on complex data problems, collaborate with cross-functional teams, and help ensure the reliability, efficiency, and quality of our data systems.
What you'll do:
Data Pipeline Development: Design, build, and optimize scalable ETL/ELT pipelines for both batch and real-time data processing from disparate sources.
Infrastructure Management: Assist in the design and implementation of data storage solutions, including data warehouses and data lakes (e.g., Snowflake, S3), and processing engines such as Spark, ensuring they are optimized for performance and cost efficiency.
Data Quality and Governance: Implement data quality checks, monitor data pipeline performance, and troubleshoot issues to ensure data accuracy, reliability, and security, adhering to compliance standards (e.g., GDPR, CCPA).
Collaboration: Work closely with product managers, data scientists, business intelligence analysts, and other software engineers to understand data requirements and deliver robust solutions.
Automation and Optimization: Automate data engineering workflows using orchestration tools (e.g., Apache Airflow, Dagster, Azure Data Factory) and implement internal process improvements for greater scalability (a brief sketch follows this list).
Mentorship: Participate in code reviews and provide guidance or mentorship to junior team members on best practices and technical skills.
Documentation: Produce comprehensive and usable documentation for datasets, data models, and pipelines to ensure transparency and knowledge sharing across teams.
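For illustration, pipelines of the kind described above are typically expressed as orchestrated code. The sketch below assumes Apache Airflow 2.4+; the DAG name, the dataset, and the extract/validate/load functions are hypothetical placeholders, not an actual Trade Desk pipeline.

```python
# Illustrative only: a minimal Airflow DAG sketching an
# extract -> validate -> load pipeline with a simple data-quality gate.
# All names and data are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_events():
    # Placeholder: pull raw records from a hypothetical upstream source.
    return [{"user_id": 1, "event": "impression"}]


def validate_events(ti):
    # Placeholder data-quality check: fail the run if records are
    # missing or lack required fields.
    records = ti.xcom_pull(task_ids="extract_events")
    assert records, "no records extracted"
    assert all("user_id" in r for r in records), "record missing user_id"


def load_events():
    # Placeholder: write validated records to the warehouse.
    pass


with DAG(
    dag_id="example_events_pipeline",  # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_events", python_callable=extract_events)
    validate = PythonOperator(task_id="validate_events", python_callable=validate_events)
    load = PythonOperator(task_id="load_events", python_callable=load_events)

    # Quality checks run between extraction and loading so bad data
    # never reaches downstream consumers.
    extract >> validate >> load
```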
Who you are:
Bachelor's degree in computer science, information security, or a related field, or equivalent work experience. Master's degree preferred.
4+ years of experience in a data engineering role, with a broad understanding of data modeling, SQL, OLAP, and ETL, required. Experience working with data pipelines, including data modeling at petabyte scale, is a bonus.
4+ years of experience designing and implementing data and analytics solutions across multiple database platforms, using technologies such as Snowflake, Databricks, Vertica, SQL Server, and MySQL, required.
4+ years of experience in one or more programming languages, particularly SQL, required. Proficiency in at least one of the following also required: PL/SQL, Python, C#, Scala, or Java.
Experience with workflow technologies such as Spark, Airflow, Glue, Prefect, or Dagster required.
Experience with version control systems, specifically Git, required.
Familiarity with DevOps best practices.