Software Engineer-1715

Remote Full-time
About the position

FreeWheel, a Comcast company, provides comprehensive ad platforms for publishers, advertisers, and media buyers. Powered by premium video content, robust data, and advanced technology, we're making it easier for buyers and sellers to transact across all screens, data types, and sales channels. As a global company, we have offices in nine countries and can insert advertisements around the world.

Job Summary

This position is eligible to work remotely one or more days per week, per company policy.

Disclaimer: This information has been designed to indicate the general nature and level of work performed by employees in this role. It is not designed to contain or be interpreted as a comprehensive inventory of all duties, responsibilities, and qualifications.
Responsibilities

• Contribute to a team responsible for designing, developing, testing, and launching critical systems within the data foundation team
• Perform data transformations and aggregations using Scala within the Spark framework, including Spark APIs, Spark SQL, and Spark Streaming
• Use Java within the Hadoop ecosystem, including HDFS, HBase, and YARN, to store and access data and automate tasks
• Process data using Python and shell scripts
• Optimize performance using the Java Virtual Machine (JVM)
• Architect and integrate data using Delta Lake and Apache Iceberg
• Automate the deployment, scaling, and management of containerized applications using Kubernetes
• Develop software infrastructure using AWS services, including EC2, Lambda, S3, and Route 53
• Monitor applications and platforms using Datadog and Grafana
• Store and query relational data using MySQL and Presto
• Support applications under development and customize current applications
• Assist with the software update process for existing applications and roll-outs of software releases
• Analyze, test, and assist with the integration of new applications
• Document all development activity
• Research, write, and edit documentation and technical requirements, including software designs, evaluation plans, test results, technical manuals, and formal recommendations and reports
• Monitor and evaluate competitive applications and products
• Review literature, patents, and current practices relevant to the solution of assigned projects
• Collaborate with project stakeholders to identify product and technical requirements
• Conduct analysis to determine integration needs
• Perform unit tests, functional tests, integration tests, and performance tests to ensure functionality meets requirements
• Build CI/CD pipelines to automate the quality assurance process and minimize manual errors

Requirements

• Bachelor's degree, or foreign equivalent, in Computer Science, Engineering, or a related technical field, and two (2) years of experience:
• Performing data transformations and aggregations using Scala within the Spark framework, including Spark APIs, Spark SQL, and Spark Streaming
• Using Java within the Hadoop ecosystem, including HDFS, HBase, and YARN, to store and access data and automate tasks
• Processing data using Python and shell scripts
• Developing software infrastructure using AWS services, including EC2, Lambda, S3, and Route 53
• Monitoring applications and platforms using Datadog and Grafana
• Storing and querying relational data using MySQL and Presto
• Of which one (1) year includes: optimizing performance using the Java Virtual Machine (JVM); architecting and integrating data using Delta Lake and Apache Iceberg; and automating the deployment, scaling, and management of containerized applications using Kubernetes
