MUFG Union Bank Jobs


MUFG Union Bank Data Engineer in Charlotte, North Carolina

Data Engineer - 10044867-WD


Do you want your voice heard and your actions to count?

Discover your opportunity with Mitsubishi UFJ Financial Group (MUFG), the 5th largest financial group in the world (as ranked by S&P Global, April 2020). In the Americas, we’re 13,000 colleagues, striving to make a difference for every client, organization, and community we serve. We stand for our values, developing positive relationships built on integrity and respect. It’s part of our culture to put people first, listen to new and diverse ideas and collaborate toward greater innovation, speed and agility. We’re a team that accepts responsibility for the future by asking the tough questions and owning the solutions. Join MUFG and be empowered to make your voice heard and your actions count.

Job Summary

MUFG Americas is embarking on a business and technology transformation to effectively deliver five key business imperatives: Growth, Business Agility, Client Experience, Effective Controls, and Teamwork. To accomplish these imperatives, MUFG has launched a Transformation Program built upon the following foundation pillars:

  1. Core Banking Transformation Program

  2. Data Governance, Infrastructure & Reporting Program

  3. Technology Modernization Program

This position supports the Core Banking Transformation (CBT) Program. CBT is a multi-year effort to modernize our deposits platform with a premier, digitally led, and simplified ecosystem for consumer, small business, commercial, and transaction banking to deliver an exceptional customer experience and give the bank a competitive advantage in the market. Our customers will benefit from streamlined and automated processes that will also provide the bank business process efficiencies and operational cost savings.

Role Summary:

The Core Banking Transformation technology team seeks a hardworking Data Engineer who is collaborative and passionate about solving complex data engineering problems. This role is responsible for the design, build, implementation, monitoring, and management of the MUFG Core Banking data services gateway, which provides the foundation for the technology modernization and digital transformation.

As a data platform engineer, you will focus on building the firm’s next generation data environment. You will be a key player in creating a data services platform that drives real-time decision-making in service of our customers. You will develop, build, and operate the platform using DevSecOps and System Reliability Engineering (SRE) methods.

Major Responsibilities:

  • Work closely with architecture teams to select, design, develop and implement optimized solutions and practices

  • Create and maintain optimal data pipeline architecture; responsibilities include the design, implementation, and continuous delivery of a sophisticated data pipeline supporting development and operations

  • Gather and process large, complex, raw data sets at scale (including writing data pipelines and scripts, calling APIs, writing SQL queries, etc.) that meet functional / non-functional business requirements

  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.

  • Analyze complex data / data models, focusing on data research for cross-functional requirements and on source and target data model analysis, to develop and support the end-to-end data mapping effort

  • Build processes supporting data transformation, data structures, metadata, dependency and workload management.

  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using programming languages and technologies such as Python, Hive/Spark, SQL/NoSQL, and pipeline tools

  • Use distributed revision control systems (Git) with branching and tagging; create and maintain release and update processes using open-source build tools

  • Develop and deliver ongoing releases using tiered data pipelines and continuous integration tools like Jenkins

  • Apply environment and deployment automation, infrastructure-as-code, and deployment data pipeline specification and development

  • Work with partners including the Business, Infrastructure and Design teams to assist with data-related technical issues and support their data infrastructure needs.

  • Be a data authority, striving for greater functionality in our data systems

  • Responsible for production readiness and all operational aspects of the new data services that will support critical MUFG applications

  • Partner with the Risk Management and Security teams to identify the standards and required controls, and lead the design, build, and rollout of secured and compliant data services to support MUFG's critical business applications and workloads

  • Partner with application and DBA teams to experiment, design, develop, and deliver on-premises as well as cloud-native solutions and services, and power digital transformations across business units

  • Embrace infrastructure-as-code, and leverage Continuous Integration / Continuous Delivery pipelines to run the full data service lifecycle, from the release of data service offerings into production through their retirement

  • Participate in software and system performance analysis and tuning, service capacity planning and demand forecasting

  • Write infrastructure, application, and data test cases, and participate in code review sessions

  • Perform analysis and tuning of infrastructure and data processing

  • Provide Level 3 support for troubleshooting and services restoration in Production

Management or Supervision: Yes


Required Education & Certifications:

  • Bachelor's degree in computer science or related field, or equivalent professional experience

Required Knowledge, Skills, and Experience:

  • 7-10 years of relevant technical experience, with at least 5 years of experience in the design, development, and delivery of critical data solutions in large, complex IT environments; possesses experienced-level skills in 3 or more of the following areas:

  • Data Warehouse, Data Mart and Data Vaults

  • Data Backup / Restore, Replication, Disaster Recovery

  • Data field encryption and tokenization

  • Application design / development / test experience with RDBMS; knowledge of NoSQL is a plus

  • Database Administration experience with Relational and NoSQL databases

  • Metadata management

  • Data services solution design and implementation experience in on-premises or cloud-native environments; possesses expert-level skills in 4 or more of the following areas:

  • Expertise in big data and ETL technologies such as Hive, Spark, AWS Glue, Redshift, and other distributed systems

  • Strong expertise in relational SQL and NoSQL databases, including Postgres, Amazon RDS, DynamoDB, etc.

  • Experience with object-oriented / functional scripting languages: Python, Java, etc.

  • Strong knowledge of cloud and distributed storage systems such as S3, HDFS, etc.

  • Experience with data pipeline and workflow tools, such as Syncsort

  • Experience with stream-processing systems: Kafka, AWS Kinesis, Apache Storm, Spark-Streaming, etc. is a plus.

  • A successful history of manipulating, processing, and extracting value from large, disconnected datasets, with ETL and data engineering know-how in SQL, Informatica PowerCenter, or similar

  • Experience with cloud services platforms for data management and integration

  • Awareness of data governance aspects like metadata, business glossaries, data controls, data protection, canonical models, etc.

  • Experience developing, deploying, and managing applications in the cloud (AWS, GCP, etc.)

  • Strong scripting experience (Bash, Python, etc.) automating processes and deployments

  • Familiarity with DevOps toolchains (e.g., Bitbucket, JIRA, Jenkins Pipeline, Artifactory or Nexus, Git, Ansible), and experience automating and deploying n-tier application stacks in cloud-native environments

  • Excellent data & system analysis, data mapping, and data profiling skills

  • Demonstrated good understanding of modern, cloud-native application models and patterns

  • Excellent collaboration skills and a passion for problem solving, with the ability to work alternative coverage schedules

  • Strong verbal and written communication skills required due to the dynamic nature of collaboration with leadership, customers, and other engineering teams
Desired Knowledge, Skills, and Experience:

  • Experience within a high integrity, and/or regulated environment (government, healthcare, financial sectors, etc.)

  • AWS professional-level certifications are preferred but not required

The above statements are intended to describe the general nature and level of the work being performed. They are not intended to be construed as an exhaustive list of all responsibilities, duties, and skills required of personnel so classified.

We are proud to be an Equal Opportunity / Affirmative Action Employer and committed to leveraging the diverse backgrounds, perspectives, and experience of our workforce to create opportunities for our colleagues and our business. We do not discriminate in employment decisions on the basis of any protected category.

A conviction is not an absolute bar to employment. Factors such as the age of the offense, evidence of rehabilitation, seriousness of violation, and job relatedness are considered in all employment decisions. Additionally, it’s the bank’s policy to only inquire into a candidate’s criminal history after an offer has been made. Federal law prohibits banks from employing individuals who have been convicted of, or received a pretrial diversion for, certain offenses.

Job : Technology

Primary Location : ARIZONA-Tempe

Other Locations : TEXAS-Dallas, NORTH CAROLINA-Charlotte

Job Posting : Jul 1, 2021, 8:17:05 AM

Shift : Day

Schedule : Full Time

Req ID: 10044867-WD