OVERALL PURPOSE OF THE ROLE
As a Technical Specialist at Alstom, you will help transform our data into tangible business value by developing data pipelines, analyzing information, communicating outcomes, and collaborating on product development. You will work with best-in-class open-source and visual tools, along with the most flexible and scalable deployment options. Whether investigating trends or patterns, you will work to solve real-world problems. Empowered by data science, we build cognitive and analytic applications that catalyze Alstom's transformation into a premier analytics enterprise. Adopting Kaizen principles, our team includes data scientists and data engineers, as well as automation, operations, and process experts. We want the brightest minds doing work that inspires, in a fast-changing environment where growth is encouraged. You will have the opportunity to discover your potential, with the support to build breakthroughs that help our business succeed. We are a diverse team of people who want their ideas to matter. Join us; you'll be proud to be part of our team.
Organizational Reporting
Reports to Delivery Manager
Networks & Links
Internally
Applications Owners
Application Architects
Business Stakeholders
Digital Platforms
Digital Incubation team
Externally
Third-party technology providers
Strategic Partners
Peer companies
Location
Position will be based in Bangalore
Willingness to travel occasionally for onsite meetings and team workshops, as required
RESPONSIBILITIES:
The candidate's primary focus will be building data pipelines, creating and maintaining MLOps workflows, developing core libraries for data science projects, and building high-quality prediction systems using large data sets. The candidate will be responsible for the end-to-end performance and availability of the Data Science platform, and must drive tangible business value from data-based insights while working with a wide range of stakeholders and functional teams.
Collaboration & team efforts:
Collaborate with the Incubation and other analytics teams, along with subject matter experts (SMEs), to drive and operationalize analytical and cognitive prototypes
Develop features to deliver functionality defined in the Platform roadmap
Establish standards and ways of working for collaborative development
Automate access provisioning and continuously monitor system usage
Conduct research and develop prototypes and proofs of concept from it
Look for opportunities to use insights/datasets/code/models across other functions in the organization (for example in the HR and marketing departments)
Maintain clear and coherent communication, both verbal and written, to understand data needs and report results.
Technical efforts:
Develop data pipelines to collect and collate data for Analytical projects
Administer and operate the Dataiku platform
Identify common processing steps and develop re-usable code libraries
Develop MLOps workflows for accelerated development and deployment of Analytical models
Create automated testing routines to enable continuous integration and continuous deployment of models
Implement version control for all data and code artifacts
Perform root cause analysis for job failures in production and implement corrective actions
Implement containerized deployment strategies for scalability and availability
Extend platform features by integrating open-source plugins
Measure and monitor performance drift of models in production
Assess the effectiveness and accuracy of new data sources and data-gathering techniques
Develop models and perform statistical analysis to identify key areas, as needed
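To illustrate the drift-monitoring responsibility above, here is a minimal sketch of one common approach: comparing the distribution of model scores in production against the training baseline using the Population Stability Index (PSI). The PSI technique and the 0.2 alert threshold are conventional rules of thumb, not anything prescribed by this role description, and the sample data is purely illustrative.

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline (training-time) sample
    and a production sample of model scores. As a rule of thumb, values
    above ~0.2 are often treated as a signal of significant drift."""
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0

    def hist(xs):
        counts = [0] * bins
        for x in xs:
            # clamp into [0, bins-1] so out-of-range production values
            # fall into the edge bins instead of raising an error
            i = min(max(int((x - lo) / width), 0), bins - 1)
            counts[i] += 1
        # a small floor avoids log(0) / division by zero for empty bins
        return [max(c / len(xs), 1e-6) for c in counts]

    e, a = hist(expected), hist(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [i / 100 for i in range(100)]                   # scores at training time
drifted = [min(1.0, 0.3 + i / 100) for i in range(100)]    # shifted production scores

print(round(psi(baseline, baseline), 4))   # identical samples: ~0.0
print(psi(baseline, drifted) > 0.2)        # shifted distribution flags drift
```

In practice a check like this would run as a scheduled job against production scoring logs and raise an alert (or a ticket) when the index crosses the agreed threshold.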
EDUCATION:
A degree in a quantitative field (Statistics, Mathematics, Operations Research, Economics, Computer Science, etc.). A postgraduate degree involving research in a quantitative field is a plus.
EXPERIENCE:
5+ years of overall experience, including 3+ years in the data domain building data pipelines and analytical solutions
COMPETENCIES & SKILLS:
Technical:
Strong programming foundation in Python is a must
Expertise in building data pipelines using Python, Spark, and other SaaS-based ETL/ELT tools
Working knowledge of SQL and of relational and NoSQL databases
Experience processing large volumes of structured and unstructured data, including integrating data from relational databases and SaaS APIs
Experience working in Linux/Unix environment and exposure to command line utilities.
Understanding of containerized deployments using Docker, Kubernetes etc.
Exposure to Jira, Git, and Jenkins
Knowledgeable in DevOps/MLOps implementation methodologies
Knowledge of data mining, machine learning, natural language processing, or information retrieval
Good understanding of machine learning techniques and data science methodologies (e.g., CRISP-DM)
Experience with data visualization using tools such as Qlik, PowerBI, Tableau etc.
Behavioral:
Comfort working in a dynamic, research-oriented group with several ongoing concurrent projects
Excellent oral and written communication skills; a strong team player, fluent in English
Ability to learn new techniques and thrive in fast-changing environment
Ability to translate complex and abstract problems into technical requirements
Have experience in working with remote teams
Able to apply this technical knowledge in collaboration with other departments (engineering, procurement, projects)
Have a strong sense of customer service and satisfaction
An entrepreneurial mindset, willing to think outside the box; intellectual curiosity and creativity
A solution-oriented mindset and pragmatic execution
An agile, inclusive and responsible culture is the foundation of our company where diverse people are offered excellent opportunities to grow, learn and advance in their careers. We are committed to encouraging our employees to reach their full potential, while valuing and respecting them as individuals.