It's always fun seeing other Kiwis on HN, but this is the first time I've seen my hometown mentioned!

I do agree with your point too: perhaps emotional stimulation is also important? That can be a lot less sharp, less well-defined, but just as enriching.

It sounds like GP has very high standards for their friends, which misses the point IMO. I think we should have friends to broaden our horizons and expose us to new things. Intelligence is only one part of that.


I am not a Kiwi but was in Rotorua once, and just wanted to say it is a lovely town (there was a sign claiming it was the nicest town in NZ!) and I loved the geyser and the boiling mud! It must be funny to live with the sulphur smell and not even notice it, except when you leave and come back after a while.


If Nvidia did drop their gaming GPU lineup, it would be a huge re-shuffling of the market: AMD's market share would 10x overnight, and it would open a very rare opportunity for minority (or brand-new?) players to get a foothold.

What happens then if the AI bubble crashes? Nvidia has given up their dominant position in the gaming market and made room for competitors to eat some (most?) of their pie, possibly even created an ultra-rare opportunity for a new competitor to pop up. That seems like a very short-sighted decision.

I think that we will instead see Nvidia abusing their dominant position to re-allocate DRAM away from gaming, as a sector-wide thing. They'll reduce gaming GPU production while simultaneously trying to prevent AMD or Intel from ramping up their own production.

It makes sense for them to retain their huge gaming GPU market share, because it's excellent insurance against an AI bust.



2023: Articles about transparency hidden behind a paywall.


Disable JavaScript


Yes! SSH certificates are awesome, both for host- and client-verification.

Avoiding Trust on First Use is potentially a big benefit, but the workflow improvements for developers, and especially for non-technical people, are a huge win too.

At work, we switched to Step CA [1] about 2 years ago. The workflow for our developers looks like:

  1. `ssh client-hosts-01`

  2. Browser window opens prompting for AzureAD login

  3. SSH connection is accepted

It really is that simple, and is extremely secure. During those 3 steps, we've verified the host key (and not just TOFU'd it!), verified the user identity, and verified that the user should have access to this server.

In the background, we're using `@cert-authority` for host cert verification. A list of "allowed principals" is embedded in each user's cert and checked against the host's authorized_principals [2] file, so we have total control over who can access which hosts (we're doing this through Azure security groups, so it's all managed in our Azure portal). The generated user cert lasts for 24 hours, so we have some protection against stolen laptops. And finally, the keys are stored in `ssh-agent`, so they work seamlessly with any app that supports `ssh-agent` (either the new Windows named-pipe style, or "pageant" style via winssh-pageant [3]) - for us, that means VSCode, DBeaver, and GitLab all work nicely.
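
For anyone curious what this looks like on disk, here's a rough sketch - the hostnames, paths, and truncated key below are illustrative, not our actual config:

  # Client-side ~/.ssh/known_hosts: trust any host cert signed by our host CA
  @cert-authority *.example.com ssh-ed25519 AAAAC3Nza... host-ca

  # Server-side sshd_config: trust user certs from our user CA, and check
  # each cert's principals against a per-user principals file
  TrustedUserCAKeys /etc/ssh/user_ca.pub
  AuthorizedPrincipalsFile /etc/ssh/principals/%u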

My personal wishlist addition for GitHub: Support for `@cert-authority` as an alternative to SSH/GPG keys. That would effectively allow us to delegate access control to our own CA, independent of GitHub.

[1] https://smallstep.com/docs/step-ca

[2] https://man.openbsd.org/sshd_config#AuthorizedPrincipalsFile

[3] https://github.com/ndbeals/winssh-pageant


GitHub does have support for SSH CAs, but it's an Enterprise feature: https://docs.github.com/en/enterprise-cloud@latest/organizat...


That's very interesting, thank you for linking!


At work, we switched to Step CA [1] about 2 years ago. The workflow for our developers looks like:

  1. `ssh client-hosts-01`

  2. Browser window opens prompting for AzureAD login

  3. SSH connection is accepted

How is that simple, compared to `ssh -i .ssh/my-cert.rsa someone@destination --> connection is accepted, here's your prompt`?


The former is discoverable: it doesn't require developers having ANY knowledge of command switches (no matter how basic) nor following a set of out-of-band instructions; the "how to" is included within the workflow.


ssh-add (once per session) gives users back that incredible convenience. If you wanted to rotate certs, you’d have to add each new one, of course.


The server could display that info when a user tries to log in via interactive authentication.


It’s the exact same command as a regular SSH prompt, and it generates and uses the cert. That seems very simple.

Your command is disingenuous in that it only works if the certificate has already been issued to you. If you were to include issuance, your command would quickly become non-simple.


If I'm reading it right, there's a significant amount of setup necessary for the proposed approach anyway; generating and sharing a public key is much easier, even for the customer/client.


This is simple because it doesn’t require you to take any specific actions to make new/different hosts accessible.

If you deactivate someone in AD, poof, all their access is magically gone, instead of having to remove their public key from every server.


What if you're ssh-ing from a headless client, like a raspberry pi or a VPS?


Then it doesn’t work, but their developers are ssh-ing from their work laptops, so it doesn’t matter. Something doesn’t have to be a solution for all use cases to be a good solution.


That is also the flow for Tailscale SSH:

https://tailscale.com/kb/1193/tailscale-ssh/


What if you are in the terminal and don't have access to a browser?


Not the OP, but if anyone in my org doesn’t have access to a browser, then I can safely say they’re not connecting from a company laptop and thus should be denied access.


You really never ssh from one remote server to another?


Not GP, but:

I do; however, when I do this I make sure the certificate is signed with permit-agent-forwarding and require people to forward their ssh agent from their laptops.

This also discourages people from leaving their SSH private key on a server just for ssh-ing into other servers from cron, instead of using a proper machine key.
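
For reference, a cert like that can be minted with plain ssh-keygen, roughly like this (the CA path and identity are made up; step-ca drives the equivalent settings through its own config):

  # clear the default extensions, then re-allow only a PTY and agent forwarding
  ssh-keygen -s user_ca -I alice@example.com -n alice \
    -O clear -O permit-pty -O permit-agent-forwarding \
    -V +24h ~/.ssh/id_ed25519.pub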


Agent forwarding has its own security issues: you're exposing all your credentials to the remote host.

It's better to configure jump hosts in your local ssh config.
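
e.g. something like this in your ~/.ssh/config (hostnames hypothetical):

  Host internal-*.example.com
    ProxyJump jump.example.com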


There's SSH agent restriction now.

[1] https://www.openssh.com/agent-restrict.html
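
Per that page, keys can be constrained to specific hops at ssh-add time, roughly like this (hostnames are placeholders):

  # key may only be used via jump.example.com, and from there only to
  # authenticate to internal.example.com
  ssh-add -h jump.example.com -h "jump.example.com>internal.example.com" \
    ~/.ssh/id_ed25519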


In general for systems like this, you can open the browser link from a different host.

For example, if I've SSHed from my laptop to Host A to Host B to Host C then need to authenticate a CLI program I'm running on Host C, the program can show a link in the terminal which I can open on my laptop.


Having to interact with the browser every time I need to ssh to a machine would be extremely painful.

If key forwarding works, that might be workable.

I'm extremely wary of non-standard ssh login processes as they tend to break basic scripting and tooling.


These tools usually cache your identity, so you might only need to go through a browser once a day.


I suppose this could be solved by using the first server as an SSH jump host -- see SSH(1) for the -J flag. Useful e.g. when the target server requires public key authentication and you don't want to copy the key to the jump host. Not sure it would work in this scenario though.


SSHing from one remote server to another won’t be possible in a lot of environments due to network segmentation. For example, it shouldn’t be possible to hop from one host to another via SSH in a prod network supporting a SaaS service. Network access controls in that type of environment should limit network access to only what’s needed for the services to run.


I've seen the exact opposite configuration, where it's impossible to avoid SSHing from one remote server to another: the network segmentation means no production system can be reached directly via SSH, only through a jumphost, which obviously does not have a browser installed.


You don't need the jumphost to do the auth for the target host. You use -J and the auth happens locally and is proxied through.
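
For example (hosts made up):

  ssh -J user@jump.example.com user@target.example.com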


I can count on 1 hand the number of reasons I might need to do that and on each occasion there’s usually a better approach.

To be clear, I’m not suggesting the GP's approach is “optimal”. But if you’ve gone to the trouble of setting that up, then you should have already solved the problems of data sharing (mitigating the need for rsync), network segregation, and secure access (negating the need for jump boxes), etc.

SSH is a fantastic tool but mature enterprise systems should have more robust solutions in place (and with more detailed audit logs than an rsync connection would produce) by the time you’re looking at using AD as your server auth.


The CA CLI tool we use supports a few auth methods, including a passphrase-like one. It could likely be set up with TOTP or a hardware token as well. We only use OAuth because it's convenient and secure enough for our use case.


Never. I’ve been at this company for 8 years and owned literally thousands of hosts and we have a policy of no agent forwarding. I’ve always wondered when I would be limited by it but it simply hasn’t come up. It’s a huge security problem, so I’m quite happy with this.


Not sure why you'd get downvoted for this comment. This is likely very applicable for many orgs that have operator workstation standards -- they're some kind of Windows/macOS/Linux box with defined/enforced endpoint protection measures, and they all have a browser. Any device I can imagine ssh'ing from that doesn't have a browser is definitely out of policy.


Because both of you narrowed the scenario to what you do daily. It is a common use case to ssh from a jump server and to use ssh-based CLI tools for debugging. The issue stems from Windows users who are coupled to GUIs; that behavior pattern increases IT and DevOps costs unnecessarily.

An alternative example: our org solves the issue with TOTP, required every 8 hours for any operation, from ssh/git CLI-based actions (prompted at the terminal) to SSO integrations. That decouples security from unrelated programs. Simple and elegant.


The -J parameter to ssh will transparently use a jump server and doesn't require the SSH key to be on the third-party server. I can't speak for the tooling around step-ca, but my employer's in-house tooling works similarly: it loads the short-lived signed cert into your ssh-agent, so once you do the initial auth you can do whatever SSH things you need.


There are better ways to access remote servers than using a jump box. If you’ve gone to the lengths of tying SSH auth into web-based SSO, then you should have at least set up your other infra to manage network access already (since that’s a far bigger security concern).

Plus, as others have said, you can jump through SSH sessions with the client's ssh command (i.e. without having to manually invoke ssh on the jump box).


As pointed out, whether or not you go through a jump host isn’t relevant. We all go through jump hosts as well.

Besides, neither me nor GP is saying this needs to be a universal pattern. We are saying that it’s a viable pattern for a lot of orgs.



With e.g. Azure's CLI (az) you can specify a flag like `--use-device-code`, which shows a copy-pastable URL that you can then just visit in a browser (even on a different device).


This is a bit off topic, but does anyone know how the mechanism that triggers the web page prompt from an ssh connection actually works? Is it some kind of alternate ssh authentication method (like password/publickey) or something entirely out-of-band coming directly from the VPN app intercepting the connection?

Ever since I saw it in action with Tailscale, I've always wondered how it actually works - and I guess if anyone would know, they'd be on HN.


OOB: ".. during the SSH protocol’s authentication phase, the Tailscale SSH server already knows who the remote party is and takes over, not requiring the SSH client to provide further proof (using the SSH authentication type none)." https://tailscale.com/kb/1193/tailscale-ssh/#how-does-it-wor...


Smallstep uses ProxyCommand [0]. Not sure how Tailscale does it.

0: https://smallstep.com/docs/ssh/how-it-works
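
As a rough sketch of the ProxyCommand pattern (the helper command here is hypothetical): a wrapper refreshes the cert, popping a browser if needed, and then pipes the connection through netcat-style:

  Host *.internal.example.com
    ProxyCommand cert-helper connect %h %p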


> we've verified the host key (and not just TOFU'd it!),

How.

Specifically, what I cannot determine from their docs is how the VM obtains a host key/cert signed by the CA. How does the CA know the VM is who the VM says it is? (I.e., the bootstrap problem.)

(I assume that you also need your clients to trust the CA … and that has its own issues, but those are mostly human-space ones, to me. In theory, you can hand a dev a laptop pre-initialized with it.)


StepCA supports quite a few authentication methods, including an "admin provisioner" (basically a passphrase that can be pasted into the CLI tools' stdin).

Because each of our servers is bespoke, we can use the admin provisioner when the server is first being set up (and actually, Ansible handles this part).

I don't have experience with it, but StepCA also has Kubernetes support, and I imagine the control plane could authenticate the pod when a cert needs to be issued or renewed.
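
In plain-OpenSSH terms, once the host has proven its identity to the CA (via whichever provisioner), issuance boils down to signing the host key - something like this, with illustrative paths and names:

  # sign the host's public key as a host (-h) certificate, valid for a year
  ssh-keygen -s host_ca -I web01 -h -n web01.example.com \
    -V +52w /etc/ssh/ssh_host_ed25519_key.pub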


I can't say in the general sense, but with GCP you can retrieve the EKpub of a VM's TPM via the control plane, and then use https://github.com/google/go-attestation to verify that an application key is associated with that TPM, and hence with that VM.


I like this solution, thanks for sharing. I'd just need to swap in my own OIDC-compliant federated authentication server.


One thing I've never understood about SSH certificates for client identification: it looks like they require that, _at some point_, the SSH private keys and the certificate's private key both be in the same place? And if that's the case, doesn't it imply that you need a service where users upload their private keys?

Which would mean you have one single point of attack/DOS/failure that needs to be kept utterly secure at all costs?


You give your public key (typically into ~/.ssh/authorized_keys) and then prove you have access to the matching private key as the essential part of the challenge. You always keep the private key.


I thought the way it worked was that the certificate, signed with the CA's private key, contains only the user's public key; the ssh server, after checking that the certificate is valid, verifies that the client holds the private key corresponding to the public key in the certificate.
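
For illustration (paths and names made up), that's exactly what minting a cert with plain ssh-keygen looks like - the CA only ever touches the public key:

  # sign the user's *public* key; the private key never leaves the client
  ssh-keygen -s user_ca -I alice -n alice -V +24h ~/.ssh/id_ed25519.pub
  # -> writes the cert to ~/.ssh/id_ed25519-cert.pub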


Also: agent forwarding. The private key stays on your local machine, but you can forward your agent through ssh so you can hop onward from your next destination.


Vault also supports both client and server ssh certificates [1]. I use terraform and vault to sign server certificates at creation time.

[1] https://developer.hashicorp.com/vault/docs/secrets/ssh/signe...
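
For anyone curious, client-key signing with Vault looks roughly like this (the mount path and role name are examples):

  vault write -field=signed_key ssh/sign/dev-role \
    public_key=@$HOME/.ssh/id_ed25519.pub > ~/.ssh/id_ed25519-cert.pub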


Requiring the use of a browser, though, limits the usefulness a bit.


Now try to automate that.


How do you get the browser to open? Does it work on all operating systems and ssh clients, such as Android's JuiceSSH?


That is the first thing he mentions in the post. :-)

The answer is "not yet". But some of their LLVM PRs were accepted recently, which is a big milestone!


> At this point, I'd also like to make it clear that you do not need the forked compiler for our RISC-V-based chips, esp32c2, esp32c3, esp32c6 etc. You only need the forked compiler for our Xtensa-based chips, esp32, esp32s2 & esp32s3.

Would be cool to see the list (and number) of pending changes required.

https://discourse.llvm.org/u/andreisfr/summary

This is about as much as I can find.

https://github.com/espressif/llvm-project/pull/62


For reference, the IEEE 802.15.4 spec is ~800 pages long. 900 pages does sound like a lot considering that Matter (AFAIK?) doesn't directly spec any hardware or transport details - those being covered in 802.15.4 and Thread.

Granted, we should remember that those 900 pages include base details that CSA is probably not planning to change in the foreseeable future. They need to be very thorough.

To answer your real question: device manufacturers will likely use the Matter SDK. It would be a huge undertaking for a smart-light manufacturer to re-write all of that code from scratch!


I haven’t read the spec, but I believe that some backwards compatibility is built in, and there’s a degree of complexity involved in making a robust framework that isn’t general purpose (like Wifi), but instead offers compartmentalisation of different device types and use cases.

I agree with the manufacturer concerns, but many were in a bit of limbo while Matter and Thread languished in draft RFC hell. The spec was always ‘coming by the end of the year’, and obviously impacted by the pandemic.

At least now, there is some certainty and a path forward.


I think this idea has some serious merit, but I do wonder what the roll-out would look like - how could it be implemented practically, considering the reputation-based value of degrees?

For example, the value of a degree from MIT is not just the degree itself, but also the quality and depth of the coursework. We assume that if a student passed with A+ grades, they have a solid understanding. But we also know that, e.g., MIT teaches CS in a way that's very applicable to the CS industry, including many bits of non-standard knowledge that are not tested in the exam.

Imagine that MIT decides to become a degree-granting institution, and I obtain an "MIT CS Degree". How would an employer know whether I learned at a top-quality education provider and gained deep knowledge that covers more than the exam ever could -- or whether I self-taught and scraped through the exam with the bare minimum knowledge?

I guess MIT could structure their exams so that they cover the subject deeply, but to cover two years' worth of intense learning, surely they would need a very long (maybe impractically long) exam period?

Maybe I'm over-thinking this - I guess an MIT student would list on their CV "2 years studying at MIT".

Anyway, I think this is a fantastic idea and I'm very interested to see what other HN users think!


I think the idea would be the opposite: none of the schools would grant any degrees; instead, degrees would be granted by a new, independent, government-funded institution.

Then you can go to MIT to study, take tests, and pass courses, but to get the actual degree you have to go elsewhere.

The school can provide transcripts of the classes you attended and any projects and tests you did.

If the company cares, they can look at that, or maybe they are satisfied with proof that you were enrolled at MIT for 4 years.

For a company that wants MIT graduates, the only risk would be that someone paid to be at MIT but didn't actually study there. I don't know, maybe that's possible if you already live in Boston and pay for only one class per term, but again, they could find that out through the transcript.

Isn't that exactly what happens with the bar exam? You can study wherever you want, but in order to work as a lawyer you have to pass a state bar exam. Later, when people are interested in a lawyer's credentials, they don't ask where you passed the exam, but where you studied.


I agree, that'd probably be the best model for most of the world, but it's unlikely to fly in the US. But that's ok: they can have commercial certifying entities, and otherwise, it could be a government function.


But that's the odd part: most of the rest of the world doesn't need that, because they have government-funded universities without a profit motive. The problem this system addresses is the conflict of interest of for-profit schools, and that problem is most prevalent in the US and a few other countries where private universities dominate. Essentially, it is a system that should be used for any private school to ensure its quality.


I envisage the examination process being very thorough, with a combination of practical tests, in person cross-examination by a team of interviewers, essays, etc.

MIT could absolutely ask you any sort of question they would expect you to know, or ask you to do anything you’re expected to know how to do.

I envisage it being a multi-week process. I think for undergrad CS, maybe two weeks is enough? But it'd be up to the degree-granting institution how long it takes and how it works. I can easily see it taking a variable amount of time too, depending on the applicant. A one-day multiple-choice test vs a two-week hands-on assessment would mean the degrees carry quite different value.


This is one of my many peeves with Outlook's UX. You can do Ctrl+Alt+V, which gives you an option to paste unformatted, but it's much more finger-intensive than the other shortcut.


This is one of those rare times that you can trace it directly to Bill Gates. Another one is Ctrl+F being Forward and not Find (https://devblogs.microsoft.com/oldnewthing/20140715-00/?p=50...)

