Job Description
Role: Data Architect, Onsite (USA), Level 7

Job Summary:
Looking for a Data Architect (onsite) who will lead the design and maintenance of the data infrastructure that enables storage, access, and analysis across business and client needs. They define how data is collected, structured, integrated, and governed to ensure scalability, security, and performance.

Years of experience needed: 15+ years in data engineering in the insurance domain

Responsibilities:
Key responsibilities include modeling database structures, selecting technologies, setting governance policies, and working closely with engineers (clients), analysts, and compliance teams. The Data Architect plays a critical role in supporting business intelligence and regulatory initiatives.
- Defining data architecture strategies, frameworks, and models
- Designing and optimizing databases, warehouses, and data lakes
- Ensuring data structures meet business and compliance needs
- Collaborating with data engineers and analysts on pipeline architecture
- Overseeing metadata management, catalogs, and lineage tracking
- Ensuring data integrity, scalability, and performance
- Selecting appropriate storage and cloud platforms (e.g., Snowflake, AWS, Azure, BigQuery, Redshift)
- Supporting data governance and access control policies
- Reviewing existing systems for improvement or migration
- Documenting technical standards and architectural decisions
- System design with strategic alignment and governance

Technical Skills:
- Experience designing, implementing, and managing data analytics using Databricks in the insurance domain
- Proven experience developing and implementing ETL pipelines from various data sources using Databricks on AWS
- Design and implement scalable ETL pipelines using Databricks to process and transform data from multiple sources
- Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and deliver high-quality data solutions
- Optimize data workflows and ensure data quality and integrity throughout the ETL process
- Monitor and troubleshoot data pipeline performance, implementing improvements as necessary
- Work with cloud technologies, specifically AWS, to manage data storage and processing resources effectively
- Document data engineering processes, architecture, and best practices to ensure knowledge sharing within the team
- Stay updated with the latest trends and technologies in data engineering and cloud computing
- Strong proficiency in Python, PySpark, and SQL with hands-on experience developing data pipelines
- Data modeling, data lineage, and awareness of canonical data model implementation
- Experience in Medallion Architecture implementation
- Experience working in the insurance domain
- Experience with JIRA

Mandatory Skills:
- Strong expertise in Databricks, including the ability to develop and optimize ETL pipelines
- Proven experience with AWS cloud services, particularly in data storage and processing
- Solid understanding of data modeling, data warehousing, and data integration techniques
- Proficiency in programming languages such as Python or Scala for data manipulation and transformation
- Strong proficiency in Python, PySpark, and SQL with hands-on experience developing data pipelines
- Familiarity with data governance and data quality frameworks
- Experience in Power BI and GitLab
- Experience working with Agile methodologies

Qualification:
- MCA or bachelor's degree in engineering from a reputed college
- Databricks certification is preferred
- LOMA certification in insurance is preferred

About Mphasis
Mphasis applies next-generation technology to help enterprises transform businesses globally. Customer centricity is foundational to Mphasis and is reflected in the Mphasis Front2Back Transformation approach. Front2Back uses the