Dems have lost to Trump twice and it looks like they want to run the same campaign strategies in future elections. They are relying too heavily on "trump bad" to win, and I worry about where that will ultimately lead.
In a nutshell, this is the problem with mainstream dems (and I include Newsom in this): looks and appearance matter a lot more to them than actual policy leadership.
On the policies that actually affect people's lives, there's a lot of overlap between mainstream dems and republicans.
I live in Idaho, and school teachers here are also extremely underpaid (my kid's teachers all have second jobs). Yet our state has magically found $40M to give away to private schools while also asking the public schools to find 2% of their budgets to cut.
In both cases, I think, the solution is simple: give the teachers a raise and probably raise taxes to pay for it. However, both parties are fairly averse to the "raise taxes" portion of the message, so they instead look for other dumb flashy one-time things they can do.
Federal democrats have relied way too heavily on Republicans being a villain and vague "hope and change" promises to carry them through an election cycle. They need to actually "change" things and not just maintain the status quo when they get power.
The problem is that in the middle of such a change it's hard to recognize whether this is a real change or just another Wankel engine.
Plenty of visual programming languages have tooted their own horns as being the next transformative change in everything, and they are mostly just obscure DSLs at this point.
The other issue is that nobody knows what the future will actually look like, and predictions are often wrong. For example, with the rise of robotics, plenty of 1950s scifi thought it was just logical that androids and smart mechanical arms would be developed any year now. I mean, you can find cartoons where people envisioned smart hands giving people a clean shave. (Sounds like the makings of a scifi horror novel :D Sweeney Todd scifi redux)
I think AI is here to stay. At the very least it seems to have practical value in software development, and that won't be erased anytime soon. Claims beyond that, though, need a lot more evidence to support them. Right now it feels like people are just shoving AI into 1000 places hoping they can find a new industry like software dev.
Yeah that's another rub. The current price is basically there in the hopes that in the future they can find revenue streams to maintain their current pace.
But even if the big companies ultimately go belly up, I think the open models are good enough that we'll likely see pretty cheap AI available for a while, even if it's not as good as the SOTA when the bankruptcies roll through.
It says, for example, that it's impossible to manufacture batteries in California and cites Tesla moving to Texas as the example. But Tesla still makes batteries in Fremont, California. They last expanded their battery manufacturing plants there in 2023.
It cites all the dangerous chemicals used in manufacturing, but those aren't banned in California. CA has safety requirements for handling toxic materials. And we should be safely handling those materials; it's crazy to suggest we shouldn't in the name of progress or whatever.
You might be right, but the site is explicit about the Fremont plant being exempted, and opens with the claim that there are facilities grandfathered in.
The concept of "grandfathering" rule breakers has always seemed like naked corruption to me. OK, we think this thing is so bad that we're passing a law to ban it, BUT everyone who was already doing this bad thing can keep doing it forever because... because... because putting an existing company out of business is apparently the worst thing in the world. If our elected officials think something is bad enough to ban outright, then they should go whole hog and actually ban it. Not just prevent upstarts from competing with the existing legacy industry.
It's not just politics, it's fairness. You can't just up and decide one day to make something illegal that others depend on for their livelihood. It's good enough that it limits the growth of the banned thing.
Sure you can. It just takes backbone, which is rarely found in the political class.
If I, as a voter, voted for a politician who promised to ban dumping mercury in the local river, I don't expect them to say "Oh, but any company already dumping mercury in the river can keep doing so, because we don't want to hurt people's livelihood." That's not what I voted for.
Ok, but if you are investing capital in some sort of production line or industrialization you are not going to want to do that in an area where you might just lose your entire investment instantly; instead, you're just going to invest it in Texas or China. Of course with more extreme examples like yours you do have to put some cost on the existing companies to get it fixed, but it would be something with a smaller cost like having to dispose of the mercury properly (whereas in this article's examples they just flat out ban these things, which you can't do to existing factories).
For sure there would be a disincentive to "invest" in the area where you might lose the investment. That would be intentional. As a voter, I specifically don't want companies to be making those kinds of "investments" in my region. Go "invest" your dirty industry in China. If California's reputation for harshly regulating these things prevents these kinds of businesses from opening here in the first place, I consider that Working As Intended. We could make that reputation even stronger by not grandfathering things.
Putting an existing company out of business means putting thousands of people out of work. That's the kind of thing that gets your party thrown out of office.
> Tesla's Fremont factory was the former NUMMI plant (GM/Toyota, operating since 1962). It was grandfathered in. When Tesla needed to expand battery production, they built the Gigafactory in Reno, Nevada — not California — because the permitting for battery cell manufacturing was effectively impossible. The Cybertruck factory went to Austin, Texas.
His point was that they were grandfathered in for making cars in general. But he flat out lies about making batteries being something grandfathered in. That wasn't a battery manufacturing plant to begin with.
And he further lies when he says they had to build elsewhere because cell manufacturing was "effectively impossible": they expanded the factory for cell manufacturing in 2023. [1]
I didn't read the text, but if you're referring to the quoted passage, it's not clear whether the implication is that they were building batteries in _Fremont_ and then wanted to expand, or that they were building them elsewhere and chose Nevada as the expansion site. The sentence is not written with clarity. It's written as people would speak.
> Tesla's Fremont factory was the former NUMMI plant (GM/Toyota, operating since 1962). It was grandfathered in. When Tesla needed to expand battery production, they built the Gigafactory in Reno, Nevada — not California — because the permitting for battery cell manufacturing was effectively impossible. The Cybertruck factory went to Austin, Texas.
What part am I misreading? How is it that Tesla expanded their cell manufacturing in California in 2023 when it was "effectively impossible"?
It is a battery cell manufacturing line, with the intent of providing research resources to California battery businesses. Considering the claim was that you can't make lithium batteries in CA, having a new manufacturing line shows that's incorrect. It also shows the state sees it as a need. I haven't seen anyone provide any info showing it isn't possible.
At my work, one thing I've often had to explain to devs is that the Parallel collector (and even the Serial collector) is not bad just because it's old or simple. It isn't always the right tool, but for those of us who do a lot of batch data processing, it's the best collector around for that data pipeline.
Devs keep trying to sneak in G1GC or ZGC because they hyper-focus on pause time as the only metric of value. Hopefully this new log:cpu will give us a better tool for measuring GC time and computational cost. And for me, it will make for a better way to argue that "it's OK that the parallel collector had a 10s pause in a 2-hour run".
Every GC algorithm in HotSpot is designed with a specific set of trade-offs in mind.
ZGC and G1 are fantastic engineering achievements for applications that require low latency and high responsiveness. However, if you are running a pure batch data pipeline where pause times simply don't matter, Parallel GC remains an incredibly powerful tool and probably the one I would pick for that scenario. By accepting the pauses, you get the benefit of zero concurrent overhead, dedicating 100% of the CPU to your application threads while they are running.
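Concretely, the choice mostly comes down to one startup flag. A sketch (class names, heap sizes, and the log file path below are hypothetical; defaults differ by JDK version):

```shell
# Batch pipeline: maximize throughput, accept long stop-the-world pauses.
java -XX:+UseParallelGC -Xms8g -Xmx8g \
     -Xlog:gc*:file=gc.log \
     -cp app.jar BatchJob

# Latency-sensitive service: pay concurrent-GC overhead for very short pauses.
java -XX:+UseZGC -Xms8g -Xmx8g \
     -Xlog:gc*:file=gc.log \
     -cp app.jar WebService
```

The `-Xlog:gc*` output includes the per-collection `gc,cpu` lines (User/Sys/Real time), which is what you'd compare when arguing throughput vs. pause time.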
Gotta be honest, I have a hard time arguing for G1 over ZGC. It seems to me like any situation you'd want G1 you probably want ZGC instead. That default 200ms target latency is already pretty long. If you've made that tradeoff for G1 because you wanted lower latency, you probably are going to be happier with ZGC.
I also find that the parallel collector is often better than G1, particularly for small heaps. With modern CPUs, parallel is really fast. Those 200ms pauses are pretty easy to achieve if you have something like a 4gb heap and 4 cores.
The other benefit of the parallel collector is that its off-heap memory allocation is quite low. It was a nasty surprise to us with G1 how much off-heap memory was required (that was on Java 11; I know it's gotten a lot better).
We have many apps that run on <1 core just fine for the business logic and run on K8S. If we then use a parallel or concurrent garbage collector, it will eat through the CPU limit of the app in a blink, causing the process not to be scheduled for several ticks. This introduces more latency than the GC cycles themselves would when using a serial GC that runs frequently enough.
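A sketch of that setup (heap sizes and jar name are hypothetical): pick the single-threaded collector and cap the JVM's view of the CPU count, so GC work can't burst past a ~1-core K8s limit and trip CFS throttling.

```shell
# Serial GC: no GC thread pool to burst past the container CPU limit.
java -XX:+UseSerialGC \
     -XX:ActiveProcessorCount=1 \
     -Xms512m -Xmx512m \
     -jar app.jar
```

`-XX:ActiveProcessorCount` (JDK 10+) overrides the CPU count the JVM detects, which also shrinks sizing decisions that scale with core count.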
What I want on Windows is Notepad++ or Kate (and even Kate is a bit much). That's the full extent of the features I'd want in something like Notepad.
Adding RTF and a WYSIWYG markdown editor is the last thing I want from something like Notepad. When I open Notepad, I still want to see the characters that are present. Heck, I'd like to be able to see the difference between a space and a tab. I'd want to be able to see which type of line ending is being used (and switch to the correct one, \n). Hiding characters is antithetical to the reason I'd use Notepad in the first place.
I want to be able to search text and see text. Not compose a document or talk to an LLM.
Oh, I've ditched Windows, or I would go grab Kate (I use it on my Linux box). I'm just commenting that even if you were to enrich Notepad's features, the direction to take it is towards a Kate-style editor, not towards a WordPad-style editor.
The main case I can think of is wanting some forum functionality. Perhaps you want to allow your users to be able to write in markdown. This would provide an extra layer of protection as you could take the HTML generated from the markdown and further lock it down to only an allowed set of elements like `h1`. Just in case someone tried some of the markdown escape hatches that you didn't expect.
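As a sketch of that second pass (the allowlist and function names are made up; a real app should use the browser's Sanitizer API via `setHTML`, or a library like DOMPurify, rather than hand-rolled string matching, which this toy version uses purely to illustrate the idea):

```javascript
// Toy allowlist pass over markdown-generated HTML. NOT production-grade:
// it doesn't handle attributes, quoted ">" inside attributes, or comments.
const ALLOWED = new Set(["h1", "p", "em", "strong", "code", "a"]);

// HTML-escape a tag so it renders as literal text instead of markup.
function escapeTag(tag) {
  return tag.replace(/</g, "&lt;").replace(/>/g, "&gt;");
}

// Keep tags whose element name is on the allowlist; escape everything else.
function allowlistFilter(html) {
  return html.replace(/<\/?([a-zA-Z][a-zA-Z0-9-]*)\b[^>]*>/g, (tag, name) =>
    ALLOWED.has(name.toLowerCase()) ? tag : escapeTag(tag)
  );
}

console.log(allowlistFilter("<h1>Title</h1><script>alert(1)</script>"));
// → <h1>Title</h1>&lt;script&gt;alert(1)&lt;/script&gt;
```

The same shape works with the Sanitizer API by passing the allowed element list in the sanitizer config instead of filtering strings yourself.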
I think this might be the answer. There's no point to it by itself (either you separate data and code or you don't and let the user do anything to your page), but if you're already using a sanitiser and you can't use `textContent` because (such as with Markdown) there'll be HTML tags in the output, then this could be extra hardening. Thanks!
I wouldn't trust myself to always remember to sanitize it, and in a company with more than one person, it becomes impossible to ensure it is properly handled.
Seems like this has a bunch of footguns. Particularly if you interact with the Sanitizer API, and especially if you use the "remove" style of sanitizer configuration.
Don't get me wrong, it's better than nothing, but also really, really consider just using "setText" instead and never allowing the user to add any sort of HTML to the document.
Using an allowlist-based sanitizer you are definitely less likely to shoot yourself in the foot, but as long as you use setHTML you at least can't introduce XSS.