Hacker News | drydenwilliams's comments

Market-based vs. location-based carbon accounting strikes again!

The market-based approach lets Google claim lower emissions if they've bought offsets or RECs, but it doesn't reflect the actual grid mix at the time a prompt runs.

Location-based numbers give a truer picture of real-world emissions.
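To make the difference concrete, here's a minimal sketch of the two accounting methods. All numbers are hypothetical, just to show the mechanics:

```python
# Hypothetical numbers for illustration; real grid intensities vary by
# region and by hour of the day.
def location_based(kwh: float, grid_intensity_g_per_kwh: float) -> float:
    """Emissions from the actual grid mix where/when the workload ran (gCO2e)."""
    return kwh * grid_intensity_g_per_kwh

def market_based(kwh: float, grid_intensity_g_per_kwh: float,
                 rec_covered_kwh: float) -> float:
    """Emissions after subtracting kWh 'covered' by purchased RECs (gCO2e).
    RECs zero out covered energy on paper, regardless of the real-time grid mix."""
    uncovered = max(kwh - rec_covered_kwh, 0.0)
    return uncovered * grid_intensity_g_per_kwh

# A prompt consuming 0.3 Wh on a 400 gCO2e/kWh grid:
kwh = 0.0003
print(location_based(kwh, 400))     # what the grid actually emitted
print(market_based(kwh, 400, kwh))  # 0.0 on paper if fully REC-covered
```

Same electrons, two very different numbers: that's the whole dispute.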


Apparently the internet frowns upon asking for upvotes directly.

So instead, here’s an interesting data point: shifting your CI/CD jobs to greener regions can cut emissions by up to 90%. If that’s worth an upvote somewhere, I’ll let you decide where to click it ;)
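For anyone curious where a "up to 90%" figure can come from, here's a sketch of picking the greenest region for a CI job. The region names and intensity numbers are made up; in practice you'd pull live data from an API such as Electricity Maps or WattTime:

```python
# Hypothetical snapshot of grid carbon intensity, gCO2e/kWh.
REGION_INTENSITY = {
    "us-east-1": 420,
    "eu-west-1": 310,
    "eu-north-1": 45,   # e.g. a hydro-heavy Nordic grid
}

def greenest_region(intensity: dict) -> str:
    """Return the region with the lowest carbon intensity right now."""
    return min(intensity, key=intensity.get)

def savings_vs(baseline: str, intensity: dict) -> float:
    """Fractional emissions reduction from moving a job off the baseline region."""
    best = intensity[greenest_region(intensity)]
    return 1 - best / intensity[baseline]

print(greenest_region(REGION_INTENSITY))
print(f"{savings_vs('us-east-1', REGION_INTENSITY):.0%}")  # ~89% lower
```

With numbers like these the saving falls out of simple division; the hard part is that intensity changes hourly, so the routing has to be dynamic.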


Would love to hear your thoughts, especially if you've made region-level decisions for training infrastructure. I know it’s rare to find devs with hands-on experience here, but if you're one of them, your insights would be gold.


Saw some pretty eye-opening research from Verisk Maplecroft about rising climate risks for global data centres. Thought it was worth bringing here since a lot of us depend on multi-region infrastructure without really tracking these risks.

Key points from the report:

- Over 50% of the top 100 data centre hubs already face high heat and water stress risks.
- By 2040, nearly 75% will be in extreme heat zones, with major jumps in cooling demand and operating costs.
- Cooling already makes up ~40% of total data centre power usage, and that’s expected to increase sharply.
- A typical mid-sized DC burns through 1.4 million litres of water per day for cooling. With rising temps, that’s going to get worse.
- By 2030, more than half of these hubs will also be in high water stress areas, meaning not just cost issues but also risks of outages, or even political blowback from local communities.

Thought-starter:

Is anyone here factoring in environmental stress when designing multi-region failover or disaster recovery setups? Right now, most of us optimise for latency and cost, but it feels like these environmental risks are going to creep into operational reliability decisions, especially with regulatory pressures increasing too.

Tools like CarbonRunner.io route workloads based on both carbon intensity and water stress data, turning off regions with unsustainable risk factors. Curious if anyone else is doing something similar or if this is still too early-stage for most teams.
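To illustrate the idea (not CarbonRunner's actual model; all names, weights, and numbers here are hypothetical), multi-factor routing can be as simple as a weighted score plus a hard veto on regions over a water-stress threshold:

```python
# region: (carbon intensity gCO2e/kWh, water stress index 0-1), hypothetical.
REGIONS = {
    "us-west-2": (290, 0.85),   # low-ish carbon but severe water stress
    "eu-west-1": (310, 0.30),
    "eu-north-1": (45, 0.10),
}
WATER_STRESS_CUTOFF = 0.8

def eligible(regions: dict) -> dict:
    """Hard veto: drop regions over the water-stress cutoff entirely."""
    return {r: v for r, v in regions.items() if v[1] < WATER_STRESS_CUTOFF}

def best_region(regions: dict, carbon_weight=0.7, water_weight=0.3) -> str:
    """Pick the lowest weighted score among eligible regions."""
    ok = eligible(regions)
    max_c = max(v[0] for v in ok.values())  # normalise carbon to [0, 1]
    def score(r):
        c, w = ok[r]
        return carbon_weight * (c / max_c) + water_weight * w
    return min(ok, key=score)

print(best_region(REGIONS))  # us-west-2 is vetoed on water stress alone
```

The interesting design choice is the veto: a region can look great on carbon and still be the wrong place to run anything, which pure carbon-aware routing misses.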

Source (Maplecroft report): https://www.maplecroft.com/products-and-solutions/sustainabl...


I've got a few questions about CI/CD-land and wondered if you could help.


This website is carbon-aware, changing its features based on live grid intensity.
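The core of the carbon-aware pattern is a small decision function: degrade features as grid intensity rises. A minimal sketch, with illustrative thresholds (live data would come from an API such as the UK's carbonintensity.org.uk):

```python
def feature_level(grid_g_per_kwh: float) -> str:
    """Map current grid carbon intensity (gCO2e/kWh) to a feature tier."""
    if grid_g_per_kwh < 100:
        return "full"      # images, video, animations all on
    if grid_g_per_kwh < 300:
        return "reduced"   # compressed images, no autoplay
    return "minimal"       # text-only fallback

print(feature_level(60))   # full
print(feature_level(250))  # reduced
print(feature_level(450))  # minimal
```

The same logic could run client-side on page load or server-side per request; either way the thresholds are a product decision, not a technical one.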


A demo of how you can compare different companies' website carbon emissions over time, using different website performance metrics like Google Lighthouse.


Very good point! Interestingly, though, that 350 KB is still smaller than the 441 KB average JavaScript payload for a site in 2020.


Which is a pity for everyone calling themselves "web developers".


I agree. I think you might like this, which I've been making: https://ecoping.earth/indexes/gb/digitalagencies


Yeah, definitely agree; currently it's just transferred size. But still, 75 MB is absolutely massive, the largest site we've seen so far.


It's a really nice solution for experiments, but I've found it quite difficult to get people to adopt this CSS approach in some companies (regardless of any cross-browser implications). Everyone needs to be on the same page and, of course, up to date with CSS3 animations, which can be overlooked.

