
Databricks Data Engineer

CloudTech Innovations

Toronto, Canada

Posted: 6 hours ago

Job Description

Job Title: Data Engineer - Databricks
Location: Onsite - Toronto, Canada
Employment Type: Contract

About the Role

We are seeking an experienced Data Engineer with a strong background in Databricks, Apache Spark, and modern cloud data platforms. The ideal candidate has over 5 years of experience designing, developing, and maintaining scalable data pipelines and lakehouse architectures in enterprise environments. You will work closely with solution architects, analysts, and cross-functional teams to build robust, high-performance data solutions supporting analytics and machine learning workloads.

Key Responsibilities

• Design and implement ETL/ELT pipelines using Databricks and Apache Spark for batch and streaming data.
• Develop and maintain Delta Lake architectures to unify structured and unstructured data.
• Collaborate with data architects, analysts, and data scientists to define and deliver scalable data solutions.
• Implement data governance, access control, and lineage using Unity Catalog, IAM, and encryption standards.
• Integrate Databricks with cloud services on AWS, Azure, or GCP (e.g., S3, ADLS, BigQuery, Glue, Data Factory, or Dataflow).
• Automate workflows using orchestration tools such as Airflow, dbt, or native cloud schedulers.
• Tune Databricks jobs and clusters for performance, scalability, and cost optimization.
• Apply DevOps principles for CI/CD automation in data engineering workflows.
• Participate in Agile ceremonies, providing updates, managing risks, and driving continuous improvement.

Required Qualifications

• 5+ years of professional experience in data engineering or data platform development.
• Hands-on experience with Databricks, Apache Spark, and Delta Lake.
• Experience with at least one major cloud platform: AWS, Azure, or GCP.
• Strong proficiency in Python or Scala for data processing and automation.
• Advanced knowledge of SQL, query performance tuning, and data modeling.
• Experience with data pipeline orchestration tools (Airflow, dbt, Step Functions, or equivalent).
• Understanding of data governance, security, and compliance best practices.
• Excellent communication skills and ability to work onsite in Toronto.

Preferred Skills

• Certifications in Databricks, AWS/Azure/GCP Data Engineering, or Apache Spark.
• Experience with Unity Catalog, MLflow, or data quality frameworks (e.g., Great Expectations).
• Familiarity with Terraform, Docker, or Git-based CI/CD pipelines.
• Prior experience in finance, legal tech, or enterprise data analytics environments.
• Strong analytical and problem-solving mindset with attention to detail.

