
I've been using DO for a few months. I recently set up an AWS instance to use as a proxy, as I wanted something cheap in East Asia temporarily, and AWS's Tokyo region fit the bill.

Holy crap is AWS bizarre to set up. I got it done without too much trouble in the end, but man, the UI is just atrocious. It took way too long just to find the right place to go to get started with the process.



I felt the same way for a while, but eventually, while working on a large cloud-based project, I had an epiphany: the GUI is basically the phpMyAdmin interface of AWS.

phpMyAdmin is great when you're first learning to work with the LAMP stack; it lets you fiddle with values, manually create databases, munge columns, etc.

But.

If there are multiple instances of your system running "in the wild" (even just within your own company), and their versions have even the slightest chance of getting out of sync, you really want the entire change history of your database modifications checked into git. You want to be able to take any previous sorta-known database state and move it toward the current well-known database state. To do that, you need programmatic migrations.

And as soon as your software ships a "db/migrate" folder, (non-readonly) phpMyAdmin access becomes anathema to proper configuration management.
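The migration-runner idea can be sketched in a few lines. This is an illustrative Python toy, not any particular framework; the table names and migration names are made up:

```python
import sqlite3

# Minimal sketch of a programmatic migration runner: migrations are named,
# ordered steps, and the set of applied names is recorded so that any older,
# sorta-known state can be rolled forward to the current well-known one.
migrations = [
    ("001_create_users", "CREATE TABLE users (id INTEGER PRIMARY KEY)"),
    ("002_add_email", "ALTER TABLE users ADD COLUMN email TEXT"),
]

def migrate(conn, applied):
    """Apply, in order, every migration whose name is not yet in `applied`
    (in real life `applied` would live in a schema_migrations table)."""
    ran = []
    for name, sql in migrations:
        if name not in applied:
            conn.execute(sql)
            applied.add(name)
            ran.append(name)
    return ran
```

Run it twice against the same database and the second run is a no-op, which is exactly the property that makes hand edits through phpMyAdmin so dangerous: they never get recorded.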

AWS can be thought of as a big database that contains your EC2 instances and snapshots, your S3 buckets, etc. At scale, you only want to interact with this database programmatically.

So: if this was the correct hypothesis for why the GUI sucks, what would be an expected prediction? That the programmatic APIs for manipulating AWS would be great.

And they are! All the HTTP APIs for AWS--the ones you would use in any automated provisioning script--are simple, clear, and pain-free. They're definitely the "first-class" road to AWS, documented to heck, and it's clear that Amazon itself dogfoods them directly.


I would suspect AWS makes most of its money from clients managing instance counts in the mid-to-high double digits and beyond. For these clients, it's likely that at some point automation has kicked in, and the APIs get the job done much better than having someone spend time clicking away in a browser.


Most of their revenue, probably.

But I think there are plenty of mid-sized companies with multiple AWS accounts, with many machines in each one where no one is quite sure what they do or whether they can be switched off.


Sounds like they need to hire someone to take care of that then.

Disclaimer: Sysadmin/Devops


I know this is the case with probably all of their major clients. If I remember correctly, during a recruitment visit Dropbox employees talked about how they helped AWS reach milestones, since Dropbox accounted for approximately 50% of AWS's file storage service usage. Obviously this is all done programmatically, as everything is at an extremely high scale.


But DigitalOcean also puts its API front and center. It made me feel like: wow, it's nice to use, and they're geeks!

So how does DO's API compare to EC2's?


It's a billion times saner: a very simple REST API that just takes a couple of GET parameters and spits out nice JSON. I built a wrapper in Scala using the dispatch HTTP library and json4s in about 15 minutes; most of that was writing a couple of functions to handle the difference between the formatting conventions used by DigitalOcean's JSON and Scala/Java standards ("droplet_id" vs. "dropletId").
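The key-convention shim described above is trivial in any language. Here's an illustrative Python sketch (the parent's Scala version would look similar):

```python
def snake_to_camel(key):
    """Convert DigitalOcean-style JSON keys ("droplet_id") to the
    camelCase convention ("dropletId") common in Scala/Java code."""
    head, *rest = key.split("_")
    return head + "".join(part.capitalize() for part in rest)

def camelize(obj):
    """Recursively rewrite dict keys in a parsed JSON payload."""
    if isinstance(obj, dict):
        return {snake_to_camel(k): camelize(v) for k, v in obj.items()}
    if isinstance(obj, list):
        return [camelize(v) for v in obj]
    return obj
```

Applied to a payload like `{"droplet_id": 1, "backups": [{"created_at": "..."}]}`, it yields `dropletId` and `createdAt` keys while leaving values untouched.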

Meanwhile AWS has this nasty outdated SOAP API that you don't want to touch by hand with a 10-foot pole, you had better have a good AWS library for your existing language of choice or know how to make sense of a WSDL.


> AWS has this nasty outdated SOAP API that you don't want to touch by hand with a 10-foot pole, you had better have a good AWS library for your existing language of choice or know how to make sense of a WSDL.

Um, no. Every AWS service has a REST API. The only difficult thing is calculating the authentication headers, but that's finicky for a good reason.
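For the curious, the signature calculation can be sketched. This is a simplified illustration in the style of the older Signature Version 2 scheme (sorted query string, HMAC, base64); it omits details like timestamps, and real code should use the current Signature Version 4 process or an SDK:

```python
import base64
import hashlib
import hmac
import urllib.parse

def sign_request(secret_key, method, host, path, params):
    """Illustrative SigV2-style signing: canonicalize the sorted query
    string, build the string-to-sign, and HMAC-SHA256 it with the
    secret key. Not a drop-in replacement for a real AWS signer."""
    query = "&".join(
        f"{urllib.parse.quote(k, safe='')}={urllib.parse.quote(str(v), safe='')}"
        for k, v in sorted(params.items())
    )
    to_sign = "\n".join([method, host, path, query])
    digest = hmac.new(secret_key.encode(), to_sign.encode(), hashlib.sha256).digest()
    return base64.b64encode(digest).decode()
```

The point of the finickiness: because the signature covers the exact canonical request, a tampered or replayed request with different parameters fails verification.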

S3's API is extremely RESTful -- not perfect, but really good. Some of Amazon's newer APIs aren't as RESTful, but they're usually still better than what many people call "REST" today (i.e. "Hey it's not SOAP, so who cares if every flipping thing is a GET request, it must be REST!").


Ironically, your comment on the outdated SOAP API is based on knowledge that is quite outdated. AWS has had a REST API for basically the entire existence of DO.


I've never really used phpMyAdmin but I think I take your meaning. And I've always assumed that AWS made way more sense at large scale, since that's what it seems to be aimed at. So what you're saying makes a lot of sense.


I would go one step further and say that both of them are missing critical features. I recently posted a more holistic discussion of cloud devops at https://news.ycombinator.com/item?id=7384393 but it received no comment.


[deleted]


Ok I got way off topic. I don't understand why Amazon.com is so intuitive but Mechanical Turk and AWS are so "bizarre".


I work with a few ex-AWSers. Apparently work conditions are terrible and there's a lot of churn. That would explain why the design is so inconsistent.


Keep in mind that I was writing this as someone who mostly uses Riak these days. ;)


I think you will find that was OP's point.


Heaven help you if you manage to start an instance that has no permanent storage and then reboot it. Apparently that's the default, and you need to (IMO) go out of your way to get permanent storage. I understand the use case for the former, but the UI and docs for the latter use case were, last I looked, pretty difficult to grok.

Yeah, AWS has some functionality/scaling features that DO and Linode don't, but it's a smaller set of functionality than I think some people initially assume.


Rebooting an EC2 instance never causes you to lose your data (no matter what type of storage you pick).

EC2 has two types of storage: local disk ("ephemeral" or "instance storage") and network storage ("EBS"). DigitalOcean only provides local disk. In EC2 you can mix and match, but EBS is the most convenient.

If you do a stop and start on EC2 (very different from picking "reboot"), you will lose what's on local disk, because stop/start basically requests that you be moved to a different physical box. Data stored on EBS is not affected. On DigitalOcean, if you "power off" you remain on the same host but get billed at the full rate, whereas on EC2 you don't get billed for the time the instance is stopped. Rebooting on EC2 is the same as rebooting on DO.

If you use EBS for storing everything, your data survives even if the hardware your instance is running on breaks. If the hardware breaks on DigitalOcean, or on EC2 when you only use ephemeral (local) storage, you lose your data. It's the same story on DO as on EC2; the only difference is that EC2 gives you the option of storage that can survive the physical box you're on dying (EBS).
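The survival rules in these two paragraphs condense into a tiny decision table. A Python sketch of my reading of the comment above (not official AWS documentation):

```python
# Data-survival rules as described: "reboot" never loses data on either
# storage type; "stop_start" (EC2 only) may move you to a new physical
# box, wiping ephemeral/instance storage; "hardware_failure" loses
# anything on local disk. EBS, being network storage, survives both.
def data_survives(storage, event):
    """storage: 'ephemeral' or 'ebs'; event: 'reboot', 'stop_start',
    or 'hardware_failure'. Returns True if the data survives."""
    if event == "reboot":
        return True
    if storage == "ebs":
        return True  # network storage outlives the physical host
    return False     # local disk: lost on stop/start or hardware failure
```

So the only scenario where EC2 behaves worse than a reboot-safe expectation is stop/start on ephemeral storage, which is exactly the trap the grandparent comment fell into.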


> If the hardware breaks on DigitalOcean, or on EC2 when you only use ephemeral (local) storage, you lose your data.

Is that really the case? Neither is using a SAN, RAID, or another backup method to ensure data is preserved in (almost) all circumstances? Not so bothered about EC2 (due to EBS), but for DigitalOcean..? Poor show if that's the case.


It's a bit deep to understand at first, but once you've been on the platform for a while, it's pretty straightforward. I'm going to respectfully disagree with the "smaller set than people initially assume" -- there are huge differences.

Some examples:

* DNS can be set up to round-robin
* auto-scaling can be built in
* load balancers are available and easy to use
* for companies with multiple employees, various levels of access can be granted to the system via IAM roles
* you can add and subtract hard drives from an instance on the fly (EBS volumes)
* you can set those volumes up as RAID if you so desire
* you can easily move IP addresses from one instance to another

These are just the ones off the top of my head as I've been off AWS for about 6 months now. This doesn't even count how well the servers play with other amazon web services, like SQS, SNS, SES and RDS (queue, notifications, email, database).

That being said, I use DO for a few small sites I host because I don't need any of the features AWS provides.


Linode has load balancing available.

I'd think round robin DNS has nothing inherently to do with the infrastructure - that's a function of the DNS server, right?
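Right: round-robin DNS is just multiple A records for one name, with lookups rotating through them, so any DNS server can do it. A toy Python sketch (the name and addresses are illustrative placeholders):

```python
import itertools

# A zone with several A records for one hostname; round-robin is simply
# handing these out in rotation across successive lookups.
a_records = {
    "www.example.com": ["203.0.113.10", "203.0.113.11", "203.0.113.12"],
}

def resolver(name):
    """Return an iterator that yields the name's A records in rotation."""
    return itertools.cycle(a_records[name])

ips = resolver("www.example.com")
picks = [next(ips) for _ in range(4)]  # successive "lookups"
```

After three lookups the rotation wraps around, so load spreads roughly evenly across the listed addresses with no special infrastructure at all.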

Yes, there are some things AWS has that other services don't. My own experiences have been that people default to thinking AWS==cloud and that's the 'best' option, and this thinking was prevalent years ago, well before SES, SQS, RDS, etc were extra reasons for integration. They've got good services, but have done a hell of a branding job over the years as well.


That confused the hell out of me. I still don't actually understand it. I was just after an ssh host that I could bounce a SOCKS proxy through so it didn't really matter to me, but I tried to figure out what the different storage options meant and just couldn't.


Have you tried IBM's cloud? When you're done with AWS, you're still struggling with IBM's ridiculous contract paperwork and aren't even anywhere near launching any servers.



