Job Description
Database Engineer (W2 ONLY - REMOTE)
We are seeking an experienced Staff Database Engineer to design, build, and optimize complex data systems. This senior-level contractor will work across multiple domains, including data architecture, pipeline development, and system operations.
Responsibilities
- Design and implement scalable and reliable data architectures that support large-scale data processing, transformation, and analysis.
- Develop, maintain, and optimize ETL/ELT pipelines using modern tools and frameworks to move and transform data from diverse sources.
- Build and support high-performance cloud-based systems for real-time and batch processing, including data lakes, warehouses, and mesh architectures.
- Collaborate with stakeholders across engineering, data science, and product teams to gather requirements and deliver actionable data solutions.
- Interface with Electronic Health Records (EHR) and healthcare data formats to ensure integration accuracy and compliance.
- Own operational excellence for data systems, including logging, monitoring, alerting, and incident response.
- Apply advanced programming skills in .NET, Java, or a similar language, along with SQL, to engineer robust data services.
- Contribute to architecture frameworks and documentation to guide team standards and best practices.
- Act as a subject matter expert (SME), mentoring junior engineers and promoting engineering excellence across the organization.
Must Haves
- 7-10 years of professional experience in data engineering, software development, or database systems.
- Proven experience with SQL Server Integration Services (SSIS) and Azure Data Factory (ADF).
- Bachelor's degree in Computer Science, Engineering, or a related field, or equivalent experience.
- Expertise in SQL database systems and modern data processing tools and frameworks.
- Strong proficiency in at least one programming language, such as .NET, Java, Python, etc.
- Demonstrated experience with modern cloud platforms, such as Azure, AWS, or GCP.
- Familiarity with data streaming and queuing technologies, such as Kafka, SNS, RabbitMQ, etc.
- Understanding of CI/CD pipelines, infrastructure-as-code (e.g., Terraform), and containerized deployments (e.g., Kubernetes).
- Comfortable with production system support, debugging, and performance optimization.
- Strong problem-solving, communication, and collaboration skills.
- High-level understanding of big data design patterns and architectural principles, such as data lake vs. warehouse vs. mesh.
- Experience with RESTful APIs and integrating external data sources into internal systems.