
Data Pipeline Developer

Remote (Canada) / Calgary, AB

At Arcurve, we believe that work should be an enjoyable experience, and that the best “aha” moments come through team learning and continuous motivation. We know the key to success is collaboration, and that you can’t put a value on accountable, transparent, and authentic interactions. We strive to deliver exceptional service while creating lasting relationships with our employees, our students, our clients, and our community.


We’re looking for an authentic, collaborative, accountable, talented, and experienced Data Pipeline Developer to join Arcurve's Advanced Analytics team.

 

YOU ARE

  • Passionate about technology
  • An authentic and creative human
  • Driven to succeed
  • A believer in the importance of teamwork
  • Community-minded
  • An expert problem solver
  • Someone who thrives on challenge
  • Motivated by exceptional results
  • Someone who cares about their clients


THE GOAL

You will use various methods to transform raw data into useful data systems. Building robust data pipelines from many different sources is critical to the data projects we execute for our customers. Your role will be to ensure that data is easily accessible, flows smoothly, and performs well.


Overall, you’ll strive for efficiency by aligning data systems with business goals. This means liaising with business intelligence analysts, data scientists, and business users. Design and architecture skills are required both to implement reliable systems and to understand the client architectures used as data sources. The role also calls for familiarity with several programming languages and knowledge of machine learning methods. If you are detail-oriented, with excellent organizational skills and experience in this field, we’d like to hear from you!

 

THE RESPONSIBILITIES

  • Analyze and organize raw data
  • Build data systems and pipelines
  • Evaluate business needs and objectives
  • Interpret trends and patterns
  • Conduct complex data analysis and report on results
  • Prepare data for prescriptive and predictive modeling
  • Build algorithms and prototypes
  • Combine raw information from different sources
  • Explore ways to enhance data quality and reliability
  • Identify opportunities for data acquisition
  • Develop analytical tools and programs
  • Collaborate with data scientists and architects on several projects

THE REQUIREMENTS

  • Previous experience as a data engineer/data pipeline developer or in a similar role
  • Experience with Azure Data Factory, IoT Hub, Functions, Databricks, Synapse, Spark
  • Experience with AWS Lambda and Amazon Redshift
  • Technical expertise with data models and data mining
  • Knowledge of programming languages (e.g. Python, R, JavaScript)
  • Knowledge of message-oriented architectures (AMQP, RabbitMQ, Kafka) considered a plus
  • Experience developing, deploying, and maintaining SCADA systems (OSI, AVEVA/Wonderware, PI)
  • Hands-on experience with database design
  • Superb numerical and analytical skills
  • Degree in Computer Science, IT, or similar field; a Master’s degree is a plus

THE PERKS

  • A fun work atmosphere that values equity, diversity and inclusion
  • Competitive salary plus flexible health & wellness benefits
  • Hybrid work environment and flexible scheduling
  • Contract or permanent employment agreements – your choice!
  • A convenient, central location in Calgary, with the option to work remotely from anywhere in Canada

