Hacker News | Blammar's comments

I am unable to figure out how to use this.

I move blocks around. Some blocks I can attach to the bottom of "run this program" block and they clearly run. But I was unable to add any more blocks that did anything.

The screen is two dimensional so I was expecting to be able to put processing blocks anywhere. Sure I can but they do nothing.

What's missing (for me at least) is an explanation of the user interface.


The interface needs some work if you're not familiar with block-based programming environments. Dropping things into place is a bit difficult compared to Scratch or Snap.

> Some blocks I can attach to the bottom of "run this program" block and they clearly run.

The blocks with a square side are commands you can attach to each other to make a program. The round blocks go in the round slots on other blocks.

> The screen is two dimensional so I was expecting to be able to put processing blocks anywhere. Sure I can but they do nothing.

You can click on them to run them without them being connected to a "run this" block. Whatever has a yellow outline is running.


Thank you. I got a little further this time. The UI needs a LOT of work to make it friendlier. It's difficult to enter numerical values, and often the values just go blank.

I suppose there's a reason most node editors use a block-and-line interface.

I'd actually prefer a tree-structured approach, where you could turn a subgraph into a single block, allowing you to structure appropriately.


Uh if it wasn't clear, I was asking HN for a bit of help. The editor does look very cool.


Jesus fucking christ don't give them ideas.


I'm one of the designers of the GeForce 256.

The hardware transform and lighting was an enormous step forward, and there was no other single-chip manufacturer that had that functionality. Yes, it took a while before the game developers learned to use the hardware well. We supplied the cart; up to them to get the horse attached and working...

I'm not going to argue the meaning of "GPU" with the other posters. Suffice to say our intent was to implement the entire graphics pipeline in hardware, allowing a nearly complete offloading of the CPU.

We demonstrated the GeForce 256 to SGI engineers and showed that we could run their OpenGL demos at roughly the same speed they ran on their Onyx systems, which cost about 100 times as much.

The linked Nvidia article, to be honest, is marketing fluff. It took several years before we figured out how to turn a GPU into a usable parallel computation engine; in the meantime we had enough effective programmability that people hacked up D3D and OpenGL programs to do some interesting work.


I was working at a small animation studio back when the GeForce 256 was released. I distinctly remember one of our animators buying one, popping it into a 'random' Wintel machine, installing Maya, and having it run many of our scenes at speeds comparable to our very expensive SGI and Intergraph workstations. Everybody instantly realised that this was the future. Two years later virtually the entire studio was running on commodity hardware costing less than a quarter of what we used to pay for workstations.


Torus, of course, but how does that relate to enclaves, which are planar?


You might want to colour the exclave the same colour as the rest of its country, even though no connection can exist between them on the plane.


It seems like you're using "enclave" and "exclave" interchangeably, which is causing confusion. What you're referring to is an exclave; an enclave is when one region is completely surrounded by another, like Lesotho or Vatican City.


You're right, my first post should say exclaves and the three examples I gave are all exclaves.


I read in Forbes about a construction company that used AI-related tech to manage the logistics and planning. They claimed that they were saving upwards of 20% of their costs because everything was managed more accurately. (Maybe they had little control boxes on their workers too; I don't know.)

The point I am trying to make is that the benefits of AI-related tech are likely to be quite pervasive, and we should be looking at what corporations are actually doing. Sort of what this poem says:

For while the tired waves, vainly breaking / Seem here no painful inch to gain, / Far back through creeks and inlets making, / Comes silent, flooding in, the main.


Right, maybe it can improve the productivity of companies that are operating below the mean.


Apologies if this has been posted already, but my search fu failed to find a similar post.

1. Start the cancellation process for your current license

2. When offered a discount or to switch to another plan, choose the cheapest new plan

3. Once your membership is updated, start the cancellation process again immediately

4. The cancellation fee is now $0.

(Source: https://ui.dev/rwd/articles/cancel-adobe-without-paying-the-... )

I can vouch that this worked earlier this year.


but then you have been charged for the cheapest new plan lol.


Nice work indeed. However, was there a reason you didn't support gigabit ethernet? I haven't used 100mbit ethernet for more than a decade...


Hi! I'm Altan, another member of Murex. Many of the design decisions behind the switch were driven by the requirements of our underwater robot. In our case, the communication speed was capped by the transfer speed achieved over our tether (we use galvanically isolated OFDM to inject data over our powerlines). Since size and cost were our primary goals, 100mbit was more suitable than gigabit ethernet. While it would have been cooler to have a gigabit switch, it would also increase the size and cost.


I’m not sure what your background is, but 100Mb Ethernet is still rather common in embedded devices and applications where the network protocol is primarily intended to facilitate UART serial communication. Just as a general note for context, I will defer to their more specific answer for this particular application.

Neat project!


Agreed. Due to the long lifecycle of manufacturing equipment we still see a lot of 100mb out there, and it’s not even embedded.

I would note that all new products seem to be gbe or better.


As much as modern Ethernet standards are much nicer (my house is wired for and running 10Gb everywhere, with 40Gb Infiniband to a couple locations too), 100Mbps still has its place. Specifically, anything embedded, slow, and/or cheap. No reason to spend the extra money on 4 more wires and pins and trace routing if your microcontroller only sends a few packets/second.


At that price point, the cost of the RJ-45 port is probably more than the cost of the 802.3 chip, and I wonder whether the cost of supporting that old chip on a contemporary device doesn't surpass the cost of the components for the now-standard 1 Gb/s.


If you're doing tethered ROV stuff, the weight of the tether is a big, big deal, so adding 4 additional wires is a non-starter. For the stuff that goes extremely deep, they use fiber because it's much lighter. It presents significant cost increases, of course, which you'd want to avoid if you can.



It's still mass, drag and cost.


GigE needs twice the pin count. IMHO, there's not much room on the board for any more I/O. Certainly gigabit is nice, but there are plenty of applications where 100M is more than enough.


NVDA's forward PE is ~37, about what it has been for the past ~5 years I've been tracking that. So it's not overpriced based on that metric.

If you're convinced the stock is that overvalued, go short some or, if you like to live dangerously, buy some long-term put options (don't be an idiot and buy short-term options.)

I have no idea if NVDA is like Cisco Systems in 2000, or if it's something unique. What I am aware of is that there's around 5-7 trillion that were moved from stocks to t-bills since the Fed raised rates in March 2022. If and when they drop their rates back to the historical ~2.5%, it's reasonable to predict these funds will go back into stocks, which will presumably drive up prices.


That's exactly what I'm saying below - PE is still very high, hence projecting the past growth into the future. But the scale changed a bit. A 37 PE ratio is extremely high by historical standards - this was reserved for very promising, small startups. Not for 2T companies. I know this got distorted in the past 15 years by abnormally low interest rates, but sooner or later it will come back to something that makes sense.

Buying long-term put options on Nvidia now is extremely expensive - the stock has been so volatile that the price you pay for those options almost annihilates any gains you could expect, even if the stock loses 50% in 12 months.

You got me curious about those 5-7 trillions. Where do these numbers come from?


People need to stop focusing on “historical standards”. For better or worse, retail entering the market en masse (often with options trading) has created a new standard. The market over the last 10 years is the new normal, the smart people have worked this out and are making huge returns. TSLA at its peak had a WAY higher PE than NVDA does now, and NVDA is just as popular, with even stronger fundamentals.


For better or worse, human psychology doesn't change that much - those historical standards very much apply today.

For us older folks, we've seen this 'new normal' several times already - it will end the usual way. There are no free lunches, and it appears to me that we have not entered any permanently high plateau.

It's even quite funny that ~100 years ago we had the previous big pandemic, and the biggest stock market crash. The epidemic of this century is done; now we're waiting for the second part!


No, that’s objectively wrong. You simply haven’t seen retail involvement in the market on this level before, not even close. It’s a new precedent, the market has changed.

To clarify, I'm not saying NVDA won't crash from here or that bear markets no longer exist. I'm simply saying that historic PE valuations are a poor metric for assessing the potential of a stock in today's market conditions.


Well, if you use the word "objectively", I would expect some numbers to back it up.

Yes, it's probably the first time that retail is allowed to trade options. But it's not the first time that retail is all-in on stocks. I've tried to find a telling number to back it up - just check the Wiki on the 1929 crash - https://en.wikipedia.org/wiki/Wall_Street_Crash_of_1929 - there was more money lent to 'small investors' so they could buy on margin ... than the entire amount of currency circulating at the time.

On average, all those retail guys will lose money - that's the sad truth. In the long term, stocks simply follow earnings - all other movements around this trend are pretty much a zero-sum game - and the most skilled operators are not losing money in that game.

Price-to-earnings ratio is just the number of years the company 'pays for itself' if you buy it. P/E in the 40s for big chunks of the main indexes means that either there will be tremendous progress in the economy that boosts earnings, or people are hoping to resell to a bigger fool.
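To make the "pays for itself" framing concrete, a back-of-the-envelope sketch (the price and earnings figures are made up for illustration):

```python
# Hypothetical numbers: a $120 share earning $3/year per share.
price, eps = 120.0, 3.0

pe = price / eps           # 40.0: years of current earnings to "pay back" the price
earnings_yield = 1.0 / pe  # 0.025: the bond-like 2.5% yield implied by the price

print(pe, earnings_yield)  # 40.0 0.025
```

The earnings yield (the inverse of P/E) is what makes the comparison with bond rates direct: a P/E of 40 prices the stock like a 2.5% bond, unless earnings grow.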

Note: I work in finance, and I very much see the retail involvement in stocks. Hedge Funds and banks, are making a ton of money out of them, that's for sure.


Then be the options seller. You can sell cash secured puts, or a put credit spread, or a call credit spread. Calls are even more expensive than puts right now.


Selling options is an even worse idea.

Frankly, I don't understand why we made it possible for individuals to gamble by selling options. As Charlie Munger used to say, Wall Street will sell shit as long as shit can be sold.


You would be right about selling naked options. But call/put credit spreads have bounded downside, just like buying options.

Selling cash secured puts or selling covered calls would be less risky than just holding stock.
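For anyone unfamiliar, the bounded downside of a put credit spread falls straight out of the payoff arithmetic. A minimal sketch (strikes and credit are made-up numbers, ignoring commissions and assignment mechanics):

```python
def put_credit_spread_pnl(spot, short_strike, long_strike, credit):
    """P&L per share at expiration: sold a put at short_strike, bought a
    cheaper put at long_strike (< short_strike), and kept `credit` up front."""
    short_put = max(short_strike - spot, 0.0)  # what the sold put costs you
    long_put = max(long_strike - spot, 0.0)    # what the bought put pays you
    return credit - short_put + long_put

# E.g. strikes 100/90 with a $3 credit:
print(put_credit_spread_pnl(120, 100, 90, 3.0))  # 3.0  (max gain: keep the credit)
print(put_credit_spread_pnl(0, 100, 90, 3.0))    # -7.0 (max loss, even at spot 0)
```

The worst case is capped at the strike width minus the credit received, which is why the downside is bounded, unlike a naked short put.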


"What I am aware of is that there's around 5-7 trillion that were moved from stocks to t-bills since the Fed raised rates in March 2022."

This doesn't make much financial sense. Since every T-bill is held by someone, the amount of T-bills outstanding is completely determined by government issuance. And since the amount outstanding will only grow over time, no "money" will ever flow out of it. Now, if you narrow your inclusion of T-bill holders to a specific group of people, the amount this group holds could certainly go up and down over time. But then I wonder how you know this group sold stocks to buy T-bills, and why this is more significant than the actions of their counterparties: every share they sold was bought by someone else, after all.


We are at a very unique time. The stock market has basically been in a bull market for 15 years with some very short-lived sell-offs along the way. During that time we've had some incredible innovations such as the iPhone, FANG stock dominance and unprecedented profitability for years.

You've also had three or four bona fide bubbles in that span, starting around 2017. First was Bitcoin along with the stock market as a whole (with Nvidia being one of the leading stocks of that bull market advance).

Then you had Tesla go parabolic and lots of people become rich. Then you had the whole post-COVID speculative mania.

The result of this has been extreme credulity in the average person. Today's keynote is the perfect summation of this phenomenon. I saw multiple people who almost certainly couldn't explain in any level of detail how Nvidia GPUs are used for training and inference - people who instead rely on secondhand talking points like CUDA that they've learned from Jim Cramer - watching this keynote with excitement and anticipating how much it would pump their shares or call options.

Contrast this with Steve Jobs keynotes from 15 years ago when Apple's best days were well ahead of them. Most keynotes were questioned, in some cases even mocked. When Tesla stock broke out, many people couldn't make sense of it. Ditto for cryptocurrencies. But now, taking their cues from those cycles, the average person wants to ride the next bubble to riches and is trying to catch the wave and so now believes every story attached to a rising asset price.

CEOs aren't blind to this and are using every opportunity to create favorable storylines. The leadup to a keynote like this carries with it an enormous amount of pressure to deliver. Hence a company like Nvidia leaning into generative artwork and straight-up made-up storylines like robot development.

At the end of the day, I'm afraid there likely isn't all that much substance, and the evidence is beginning to pile up that the megacap tech stocks have run out of ideas, which is why they are laying off people en masse and appealing to the AI hype cycle to carry their stocks higher.

Consider that Nvidia has gone up 8x - a 700% gain! - in just over a year. The cycles are moving faster and faster. I remember just a year ago when lots of people said Nvidia at $250 was insane. Now here we are with the stock at more than three times that level, and most people are calling it cheap.

The stock market seems to have, in certain areas like semis, completely disconnected from the fundamentals and taken flight. Yes, Nvidia earnings have grown. But understand that this is all part of a positive feedback loop where tech CEOs are pressured by their competitors and shareholders to show that they are investing in AI. Thus they all talk about it on their earnings calls and spend massively. All of their stocks rise in unison as you have a market that increasingly looks like it's chasing momentum stock trends up.

Nvidia's moves of late have almost nothing to do with any fundamental developments in the company. It has been routinely trading upwards of $45 billion a day. The Friday before last that number was over $100 billion. These are absolutely insane figures. Compare that to Microsoft, the largest company by market cap in the world, which trades on average around $8 billion per day.

I think this is generally how bull markets end and I think we may be actually forming the top of the great bull market for the megacaps that began around 2010 but really hit its stride starting in 2017.


I’ll buy the credulity problem, and agree there is considerable risk in NVDA’s market position.

However, they went up 8x because (neglecting crypto) they overnight transitioned from providing accessories for PC gamers and high-end engineering workstations (both increasingly niche markets with tapering growth or decline) to being, for the moment, the only substrate of an entirely new consumer product segment that has seen the most rapid adoption of any new technology in the history of the world.

This could be the way things work now: the time constants shrink as the pipeline efficiency increases.


Good post, thanks.


Problem is that the only thing scrith can possibly be is strange matter. That's a big advance in materials science...


It could be conventional matter in the island of stability [1].

[1] https://en.m.wikipedia.org/wiki/Island_of_stability


The thing that is truly mindboggling to me is that THE SHADOWS IN THE IMAGES ARE CORRECT. How is that possible??? Does DALL-E actually have a shadow-tracing component?


Research into the internals of these networks has shown that they figure out a correct 2.5D representation of the scene before the RGB textures (internally), so yes, it seems they have an internal representation of the scene and can therefore do enough inference from it to make shadows and light seem natural.

I guess it's not that far-fetched, as your brain has to do the same to figure out whether a scene (or an AI-generated one, for that matter) has some weird issue that should pop out. So in a sense your brain does this too.


Interesting! Do you have a link to that research?


Certainly: https://arxiv.org/abs/2306.05720

It's a very interesting paper.

"Even when trained purely on images without explicit depth information, they typically output coherent pictures of 3D scenes. In this work, we investigate a basic interpretability question: does an LDM create and use an internal representation of simple scene geometry? Using linear probes, we find evidence that the internal activations of the LDM encode linear representations of both 3D depth data and a salient-object / background distinction. These representations appear surprisingly early in the denoising process−well before a human can easily make sense of the noisy images."


What does 2.5D mean?


You usually say 2.5D when it's 3D but only from a single vantage point, with no information about the back-facing sides of objects. Like the representation you get from a depth sensor on a mobile phone, or when trying to extract depth from a single photo.


It means you should be worried about the guy she told you not to worry about


I randomly checked a few links here, and the shadows were correct in 2 images out of a dozen... and any people in them tend to be horrifying.


Stable diffusion does decent reflections too


Yes! It can also get reflections and refractions mostly correct.

