ReadingRecruiter Since 2001
the smart solution for Reading jobs

Data Platform Engineer

Company: Penske Logistics
Location: Reading
Posted on: July 9, 2024

Job Description:

Catalyst AI™ is an industry-first platform that allows customers to compare, diagnose, and manage their fleets using the power of data science and Penske's deep business knowledge. This game-changing technology lets customers get apples-to-apples comparisons instead of static, aggregated industry benchmarks. Penske is the first to solve this need by leveraging AI and machine learning, robust fleet data, and our unique view on maintenance. This technology not only streamlines the fleet benchmarking process but also delivers actionable, data-driven recommendations tailored to each customer's unique needs. In this role, you will support Catalyst AI and all future generations of the product. Working with a diverse team, you will lead the technical design of complex components that support our business-critical applications, while mentoring other developers on best practices to deliver our next generation of innovative solutions to our customers.

What You Will Be Doing:

The ideal candidate should have some experience managing Kubernetes clusters on cloud platforms such as AKS (Azure Kubernetes Service), EKS (Amazon Elastic Kubernetes Service), and GKE (Google Kubernetes Engine), along with proficiency in AWS services including EC2, CloudWatch, S3, IAM, VPC, and Secrets Manager. This role requires a blend of infrastructure management, people management, administration, and DevOps skills to ensure the reliability, scalability, and security of our Kubernetes-based applications. Experience in the following areas is desirable.

Kubernetes Infrastructure Management
• Design, deploy, and maintain Kubernetes clusters across multiple environments (development, testing, production).
• Configure and optimize cluster performance, scalability, and reliability.
• Implement security best practices for Kubernetes infrastructure, including role-based access control (RBAC), network policies, and encryption.
• Monitor cluster health and resource utilization using monitoring tools such as Prometheus, Grafana, and the Kubernetes Dashboard.
• Troubleshoot and resolve issues related to Kubernetes cluster operation, networking, and performance.
• Manage autoscaling (e.g., Karpenter).

Infrastructure Automation
• Automate infrastructure provisioning, configuration, and deployment using infrastructure-as-code (IaC) tools such as Terraform, Ansible, or Helm.
• Implement CI/CD pipelines for automated deployment of containerized applications to Kubernetes clusters.
• Continuously improve deployment processes and infrastructure automation to enhance efficiency and reliability.
• Troubleshoot and resolve issues related to Kubernetes clusters, containerized applications, and underlying infrastructure components, working closely with cross-functional teams.

DevOps Collaboration
• Collaborate with development teams to streamline the containerization of applications and ensure compatibility with Kubernetes environments.
• Provide guidance and support to developers on best practices for building and packaging containerized applications.
• Work closely with DevOps and IT teams to integrate Kubernetes clusters with existing infrastructure and systems.

Monitoring and Logging
• Implement monitoring and logging solutions for Kubernetes clusters to track performance metrics, monitor application health, and troubleshoot issues.
• Configure alerts and notifications to proactively identify and address potential issues before they impact production environments.
• Monitor Kubernetes clusters and applications for performance, availability, and security using tools such as Grafana, the ELK stack, and Kubernetes-native monitoring solutions.

Security and Compliance
• Implement security controls and policies to protect Kubernetes clusters and containerized applications from security threats and vulnerabilities.
• Conduct regular security audits and assessments to ensure compliance with industry standards and regulations.
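To make the RBAC duty above concrete, here is a minimal sketch (not Penske's actual configuration) of a namespaced Kubernetes Role and RoleBinding built as plain Python dictionaries and emitted as JSON, which `kubectl apply -f` accepts interchangeably with YAML. All names here (`catalyst` namespace, `catalyst-reader`, `catalyst-app`) are hypothetical and chosen only for illustration.

```python
import json

# Hypothetical namespace and names -- illustrative only, not from the posting.
NAMESPACE = "catalyst"

# A Role granting read-only access to pods and their logs in one namespace.
role = {
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "Role",
    "metadata": {"name": "catalyst-reader", "namespace": NAMESPACE},
    "rules": [
        {
            "apiGroups": [""],                  # "" is the core API group
            "resources": ["pods", "pods/log"],
            "verbs": ["get", "list", "watch"],  # read-only verbs
        }
    ],
}

# Bind the Role to a service account so workloads get least-privilege access.
binding = {
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "RoleBinding",
    "metadata": {"name": "catalyst-reader-binding", "namespace": NAMESPACE},
    "subjects": [
        {"kind": "ServiceAccount", "name": "catalyst-app", "namespace": NAMESPACE}
    ],
    "roleRef": {
        "apiGroup": "rbac.authorization.k8s.io",
        "kind": "Role",
        "name": "catalyst-reader",
    },
}

if __name__ == "__main__":
    # kubectl apply -f accepts JSON as well as YAML.
    print(json.dumps([role, binding], indent=2))
```

Keeping verbs read-only and scoping the Role to a single namespace is the least-privilege pattern the bullet above refers to; cluster-wide access would instead use a ClusterRole.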
Additional Skills:
• Experience with Agile/Scrum methodologies and practices
• Exceptional communication and interpersonal skills, with an ability to extract, translate, and communicate meaningful information to management and peers
• Strong technical documentation skills (workflows and support documentation)
• Strong automation and problem-solving skills, with the ability to follow through to completion
• Strong leadership and communication skills

Qualifications:
• Bachelor's degree in Computer Science or Engineering, or certification in DB2/OS/networking
• 3+ years of industry experience; a background in relational databases such as Oracle/Teradata is preferred
• Programming experience in Java is required
• Some hands-on database project experience is required
• Experience with Linux scripting is required (Perl, Korn shell/Bourne shell, Python)
• Experience in JVM/JDK configuration and tuning is required
• A complete understanding of the following is required:
• Basic SQL commands necessary to perform administrative functions
• Backup/error recovery and disaster recovery of databases
• Real-time analytics and NoSQL technologies (e.g., HBase, Cassandra, and MongoDB) are a huge plus
• Understanding of the following is preferred:
• Pivotal Big Data Suite: Greenplum, GemFire, Spring Cloud Data Flow, and RabbitMQ
• Environments: Amazon Web Services, Cloud Foundry, and vCloud Air
• Big Data technologies: Hadoop, Kafka, ZooKeeper, etc.
• Cloud-native technologies, principles, and techniques such as Kubernetes, microservices, and 12-factor apps
• Any knowledge of Big Data technologies (Hadoop, Kafka, ZooKeeper, HBase, Hive, MQ, RabbitMQ) is a huge plus
• Experience with Concourse will be helpful: creating and establishing CI/CD pipelines, or integrating with existing CI/CD pipelines, for deployment of PCF/CF and related products
• Ability to enforce database policies, best practices, and standards
• Understanding of production operations and of dealing with production issues on VMware vSphere/ESXi and public cloud
• Understanding of cloud and on-premise infrastructure, including firewalls, load balancers, DNS, NTP, SAML, OAuth, Active Directory, and storage systems
• Understanding of monitoring, alerting, and analytics of system, platform, and application performance and usage (e.g., Dynatrace, Splunk)
• Proven work ethic with the utmost integrity
• Self-awareness, with a desire to constantly learn new technologies
• Self-motivated, passionate, empathetic, approachable
• Outgoing, energetic, and upbeat
• Regular, predictable, full attendance is an essential function of the job
• Willingness to travel as necessary, work the required schedule, work at the specified location, complete a Penske employment application, and submit to a background investigation (to include past employment, education, and criminal history) and drug screening are required

Physical Requirements:
- The physical and mental demands described here are representative of those that must be met by an associate to successfully perform the essential functions of this job.
- Reasonable accommodations may be made to enable individuals with disabilities to perform the essential functions.
- The associate will be required to: read; communicate verbally and/or in written form; remember and analyze certain information; and remember and understand certain instructions or guidelines.
- While performing the duties of this job, the associate may be required to stand, walk, and sit. The associate is frequently required to use hands to touch, handle, and feel, and to reach with hands and arms. The associate must be able to occasionally lift and/or move up to 25 lbs/12 kg.
- Specific vision abilities required by this job include close vision, distance vision, peripheral vision, depth perception, and the ability to adjust focus.

Penske is an Equal Opportunity Employer.

Job Family: Information Technology
Address: 100 Gundy Drive
Primary Location: US-PA-Reading
Employer: Penske Truck Leasing Co., L.P.
Req ID: 2408276
Date posted: 07/07/2024
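The backup/recovery qualification above can be illustrated with a tiny self-contained example. The posting's databases are Oracle/Teradata, but the programmatic backup-then-recover idea can be sketched with Python's standard-library `sqlite3`, whose `Connection.backup` performs an online copy of a live database; the table and data here are invented purely for the demo.

```python
import sqlite3

def backup_database(src: sqlite3.Connection, dest: sqlite3.Connection) -> None:
    """Perform an online backup of src into dest using the stdlib sqlite3 API."""
    with dest:
        src.backup(dest)

# Build a throwaway in-memory database standing in for a production DB.
prod = sqlite3.connect(":memory:")
prod.execute("CREATE TABLE fleet_events (id INTEGER PRIMARY KEY, note TEXT)")
prod.execute("INSERT INTO fleet_events (note) VALUES ('oil change')")
prod.commit()

# Back it up, then "recover" by querying the copy instead of the original.
replica = sqlite3.connect(":memory:")
backup_database(prod, replica)

rows = replica.execute("SELECT note FROM fleet_events").fetchall()
print(rows)  # the data read back from the backup copy
```

In a production Oracle or Teradata environment the same responsibility is carried out with vendor tooling (e.g., RMAN for Oracle) rather than application code; this sketch only shows the backup-and-verify pattern in miniature.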

