Why "What Happened First?" Is One of the Hardest Questions in Large-Scale Systems

Logical clocks track event order in distributed systems—no need for synced wall clocks. Each node keeps a counter. On every event: tick it. On every message: tack on your counter. When you receive one? Merge and bump. This flips the script. Instead of chasing global time, distributed systems lean int.. read more
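
The mechanics in that summary map almost one-to-one onto a Lamport clock: tick on local events, attach the counter to outgoing messages, and on receive merge by taking the max and bumping. Here is a minimal sketch in Python; the class and method names are my own illustration, not something taken from the linked article.

```python
class LamportClock:
    """Minimal logical clock: orders events without synchronized wall clocks."""

    def __init__(self):
        self.counter = 0

    def tick(self):
        # Local event: just increment the counter.
        self.counter += 1
        return self.counter

    def send(self):
        # Outgoing message: tick, then ship the current counter as the timestamp.
        return self.tick()

    def receive(self, remote_counter):
        # Incoming message: merge (take the max of both counters) and bump.
        self.counter = max(self.counter, remote_counter) + 1
        return self.counter


# Two nodes exchanging one message: the receiver's clock jumps past the sender's.
a, b = LamportClock(), LamportClock()
a.tick()                      # a = 1
ts = a.send()                 # a = 2, message carries timestamp 2
b.receive(ts)                 # b = max(0, 2) + 1 = 3
print(a.counter, b.counter)   # 2 3
```

The guarantee is one-way: if one event causally precedes another, its timestamp is smaller, but comparing two timestamps alone cannot prove that one event caused the other, which is part of why "what happened first?" stays hard.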

Why "What Happened First?" Is One of the Hardest Questions in Large-Scale Systems
Link
@faun shared a link, 3 months, 1 week ago
FAUN.dev()

Easy will always trump simple

Rich Hickey’s classic “Simple Made Easy” talk is making the rounds again—as a mirror held up to dev culture under pressure. The punchline: we keep picking solutions that are easy but tangled, instead of simple and sane. The essay draws a sharp line between that habit and a concept from biology: exapt.. read more

The Hidden AWS Cost Traps No One Warns You About (and How I Avoid Them)

Calling out five sneaky AWS cost traps—the kind that creep in through overlooked defaults and quiet misconfigs, then blow up your bill while no one's watching... read more

Kubernetes DNS Exploit Enables Git Credential Theft from ArgoCD

A new attack chain messes with Kubernetes DNS resolution and ArgoCD’s certificate injection to swipe GitHub credentials. With the right permissions, a user inside the cluster can reroute GitOps traffic to a fake internal service, sniff auth headers, and quietly walk off with tokens. What’s broken: GitOp.. read more

Kubernetes right-sizing with metrics-driven GitOps automation

AWS just dropped a GitOps-native pattern for tuning EKS resources—built to run outside the cluster. It’s wired up with Amazon Managed Service for Prometheus, Argo CD, and Bedrock to automate resource recommendations straight into Git. Here’s the play: it maps usage metrics to templated manifests, then sp.. read more
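
The summary cuts off, but the general shape of a metrics-to-Git loop like this is easy to sketch: pull recent usage from Prometheus, derive a request with some headroom, and render it into a manifest that Argo CD will sync once it lands in Git. The Python below is a rough illustration under my own assumptions (the Prometheus endpoint, PromQL query, container name, and repo path are all hypothetical, and the Bedrock and Git-commit steps are left out); it is not the AWS reference implementation.

```python
import json
import requests  # assumes a plain Prometheus-compatible HTTP API is reachable

PROM_URL = "http://prometheus.example.internal:9090"   # hypothetical endpoint
QUERY = ('quantile_over_time(0.95, '
         'container_memory_working_set_bytes{namespace="shop", container="api"}[7d])')
HEADROOM = 1.2                                          # 20% buffer over observed p95
MANIFEST = "apps/shop/api/resources-patch.json"         # hypothetical path in the GitOps repo


def recommend_memory_bytes():
    # Ask Prometheus for the 7-day p95 of the container's working-set memory.
    resp = requests.get(f"{PROM_URL}/api/v1/query", params={"query": QUERY}, timeout=10)
    resp.raise_for_status()
    result = resp.json()["data"]["result"]
    if not result:
        raise RuntimeError("no usage data for container")
    p95 = float(result[0]["value"][1])
    return int(p95 * HEADROOM)


def write_patch(memory_bytes):
    # Render the recommendation into a strategic-merge patch; committing it to Git
    # (so Argo CD reconciles it) is out of scope for this sketch.
    mi = max(64, memory_bytes // (1024 * 1024))
    patch = {
        "spec": {"template": {"spec": {"containers": [
            {"name": "api", "resources": {"requests": {"memory": f"{mi}Mi"}}}
        ]}}}
    }
    with open(MANIFEST, "w") as f:
        json.dump(patch, f, indent=2)


if __name__ == "__main__":
    write_patch(recommend_memory_bytes())
```

In the actual pattern the recommendation would land as a commit or pull request against the GitOps repo, keeping the whole control loop outside the cluster as the article describes.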

Amazon EKS Enables Ultra-Scale AI/ML Workloads with Support for 100K Nodes per Cluster

Amazon EKS just cranked its Kubernetes cluster limit to 100,000 nodes—a 10x jump. The secret sauce? A reworked etcd with an internal journal system and in-memory storage. Toss in tight API server tuning and network tweaks, and the result is wild: 500 pods per second, 900K pods, 10M+ objects, no sweat—even un.. read more

Kubernetes Primer: Dynamic Resource Allocation (DRA) for GPU Workloads

Kubernetes 1.34 brings serious heat for anyone juggling GPUs or accelerators. Meet Dynamic Resource Allocation (DRA)—a new way to schedule hardware like you mean it. DRA adds ResourceClaims, DeviceClasses, and ResourceSlices, slicing device management away from pod specs. It replaces the old device plu.. read more
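
One low-stakes way to get a feel for the new objects is to list them like any other resource. The sketch below uses the official Kubernetes Python client to print the ResourceSlices a cluster advertises; the resource.k8s.io version string is an assumption (older clusters expose a beta version instead), so check kubectl api-versions against your own API server before relying on it.

```python
from kubernetes import client, config

# Load credentials the same way kubectl does (~/.kube/config).
config.load_kube_config()

api = client.CustomObjectsApi()

# DRA objects live in the resource.k8s.io API group; the version below assumes a
# 1.34-era cluster and may differ on yours.
GROUP, VERSION = "resource.k8s.io", "v1"

slices = api.list_cluster_custom_object(GROUP, VERSION, "resourceslices")
for item in slices.get("items", []):
    meta = item["metadata"]
    spec = item.get("spec", {})
    # Each ResourceSlice describes devices published by a driver, typically per node.
    print(meta["name"], spec.get("driver"), spec.get("nodeName"))
```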

Lucidity turns spotlight onto Kubernetes storage costs

Lucidity has upgraded its AutoScaler. It now handles persistent volumes on AWS-hosted Kubernetes, automatically scaling storage and reducing waste. The upgrade brings pod-level isolation, fault tolerance, and bulk Linux onboarding. Azure and GCP are next on the list... read more

The Quiet Revolution in Kubernetes Security

Nigel Douglas discusses the challenges of security in Kubernetes, particularly with traditional base operating systems. Talos Linux offers a different approach with a secure-by-default, API-driven model specifically for Kubernetes. CISOs play a critical role in guiding organizations through the shif.. read more  

Kubernetes VPA: Limitations, Best Practices, and the Future of Pod Rightsizing

Kubernetes' Vertical Pod Autoscaler (VPA) tries to be helpful by tweaking CPU and memory requests on the fly. Problem is, it needs to bounce your pods to do it. And if you're also running Horizontal Pod Autoscaler (HPA) on the same metrics? Now they're fighting over control. VPA sees a narrow slice of .. read more
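
For intuition, here is a deliberately simplified, toy version of what a recommender does (my own sketch, not VPA's actual decaying-histogram algorithm): take recent usage samples, pick a high percentile, add a safety margin, and round up to a convenient step. Applying a number like this in place is exactly what forces the pod restarts, and what can fight an HPA keyed to the same metric.

```python
import math


def recommend_request(usage_samples_mib, percentile=0.90, margin=1.15, step_mib=64):
    """Toy right-sizing rule: a high percentile of observed usage plus a safety
    margin, rounded up to a step size. Same idea as a recommender, nothing more."""
    if not usage_samples_mib:
        raise ValueError("need at least one usage sample")
    ordered = sorted(usage_samples_mib)
    idx = round(percentile * (len(ordered) - 1))   # nearest-rank percentile
    target = ordered[idx] * margin
    return int(math.ceil(target / step_mib) * step_mib)


# A few days of memory samples (MiB) for one container: mostly ~300 with spikes.
samples = [290, 310, 305, 295, 520, 300, 315, 298, 610, 302]
print(recommend_request(samples), "Mi")   # -> 640 Mi for these samples
```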

Updates from Siemens Gamesa Renewable Energy

It takes the brightest minds to be a technology leader. It takes imagination to create green energy for the generations to come. At Siemens Gamesa we make real what matters. Join our global team.

Siemens Gamesa has a vision for renewable energy: we believe in the power of nature and technology. Help us to be ready to face the energy challenges of tomorrow and make a green footprint – join the team in creating a better future for us and our planet.

We focus on hiring the best people, wherever they may be in the world. We pride ourselves on the flexibility we offer to our employees and are committed to building a workforce that can grow with the company. Siemens Gamesa is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees.

In our culture of trust, we focus on empowerment, diversity and continuous learning. Valuing our people is what makes us one global team, with our colleagues’ safety at the heart of our organization.

Read stories from our employees, and get to know your future team: https://www.sebrochure.dk/Siemens_Gamesa_Renewable_Energy/WebView/

How to contribute to our vision

For our DevOps team within the Software Solutions division we are looking for a skilled DevOps Engineer to join our growing and dedicated DevOps team. In the department, we are responsible for the SW test strategy, its enforcement, and the creation and maintenance of the build and test environments in which automated tests are executed. This involves both HIL setups and small to large virtual environments, covering both hard real-time and large-scale requirements. We operate two data centers and two test labs.

Introducing Continuous Delivery is a strategic goal of the department, and our team has been given the responsibility for driving it. You will join an inspiring, high-performing multinational team.

As DevOps Engineer, your main task is to lead the development teams’ transformation to Continuous Delivery, i.e. to implement their Chef-based server stack and create the QA part of the Jenkins pipeline. The product and system level tests run either in a VM environment or on the physical HW. You will manage and optimize the utilization of these environments while reducing bottlenecks in the delivery pipelines.

Together with the rest of the DevOps team you develop state-of-the-art SW tools and draft the decision proposals. You roll these tools out in the development projects so they can move faster, and as a member of the DevOps team you carry the operational responsibility for them.

You act as the technical expert across multiple development projects, helping them keep their delivery pipelines running. The projects range from deeply embedded controllers, through large SCADA server systems, to central fleet management systems. Together with the rest of the DevOps team you look after the operational side of our large HW and VMware test environment.

You will implement parts of our cybersecurity strategy by adding the relevant verifications to the pipelines while keeping lead times short.

What you need to make a difference

Passion for renewable energy and a sense of how important it is to lead the change. We are looking for someone who wants to make real what matters and to move the world towards renewable energy.

The ideal candidate holds an academic degree in IT, Computer Science or similar, combined with thorough practical experience with:

Continuous Integration/Delivery and DevOps
Applying agile software development practices (e.g. SAFe & SCRUM)
GitLab, Gitflow, Artifactory, Jenkins, Docker, Kubernetes and Chef
You have a strong DevOps understanding of state-of-the-art build and code-level QA tools for C#, C++ and web development.
You have basic knowledge of BDD (Cucumber/Ruby).
You are not afraid of asking questions and taking the lead in developing the team, methods and frameworks.

Most important is your passion for continuous delivery and smooth operations in larger organizations. You bring a high level of professional competency and a desire to keep developing your skills. You collaborate well and can operate in different cultural contexts. To thrive in this position, you must have a strong result- and customer-oriented approach.

In return for your commitment we offer you…

Become a part of our mission for sustainability: Clean energy for generations to come. We are a global team of diverse colleagues who share a passion for renewable energy and a culture of trust and empowerment that lets us make our own ideas a reality. We focus on personal and professional development so you can grow within our organization. Siemens Gamesa offers a wide variety of benefits, such as flexible working hours, the option of home office for many colleagues, an employer-funded pension, an attractive remuneration package (fixed/variable) and local benefits such as subsidized lunch, employee discounts and much more.

Empowering our people

https://www.siemensgamesa.com/sustainability/employees

#Associate

How do you imagine the future?

https://youtu.be/12Sm678tjuY

Our global team is on the front line of tackling the climate crisis, reducing carbon emissions – the greatest challenge we face.