
I tried them out a few months ago looking to move my infrastructure from AWS to DO. I was experiencing terrible IO performance on a fairly expensive instance. Submitted a support ticket about it. Their response was 'this is normal'. Killed all my droplets and canceled my account the next day.

To me they are running a pretty transparent pump and dump scheme. Promise a great service, sell it at a loss, get a ton of users, do a shit job of supporting it, take on investment money, cash out early, profit.



They were having SERIOUS issues a few months ago. I had been scaling up to around $600/month in droplets, and almost canceled and walked away. The IO was horrible, packet loss was massive and droplets were crashing all the time. Fortunately, they've gotten much better again and I haven't had a single issue in 3 months.

I think they got really popular suddenly and couldn't handle the explosive growth.


I created a droplet that I was going to use for the production site. It is currently sitting in a totally crashed state. I have no idea when it crashed, or why, I just know it happened some time in the past month.

I am now nervous about moving to them.


Any server you have running in any kind of environment where reliability is a factor should have some kind of monitoring system. Maybe DO should offer this feature like others do, but it's not out of the question to have another droplet running a heartbeat monitor; paying $5/mo for that droplet amounts to the same thing as DO charging you another $5/mo for a monitoring service.


I would of course do that before I migrated. I did have munin running, but without alerts. The problem is that it crashed at all. My production server has been up for 668 days, and the temporary DO server didn't last 30.

On the production server, I do ping the server every five minutes, but the customer complaints always reach me before that system.


You can do what nagios does: ping more frequently, but not send an alert until several pings have failed. It guards against a packet lost to the ethers, but still picks up a failed service. Maybe ping every 30 secs, four missed pings = raise alert?
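Something like this is all it takes (a rough sketch, not a drop-in tool: the IP, interval, and alert mechanism are placeholders, and the ping flags are the Linux ones):

    import subprocess
    import time

    HOST = "203.0.113.10"   # placeholder droplet IP
    INTERVAL = 30           # seconds between pings
    THRESHOLD = 4           # consecutive misses before alerting

    def ping(host):
        """Return True if a single ICMP ping succeeds within 2 seconds."""
        return subprocess.run(
            ["ping", "-c", "1", "-W", "2", host],
            stdout=subprocess.DEVNULL,
            stderr=subprocess.DEVNULL,
        ).returncode == 0

    def alert(host, misses):
        # Placeholder: send an email, hit a pager API, whatever you use.
        print(f"ALERT: {host} missed {misses} pings in a row")

    misses = 0
    while True:
        if ping(HOST):
            misses = 0          # one good ping resets the counter
        else:
            misses += 1
            if misses == THRESHOLD:
                alert(HOST, misses)
        time.sleep(INTERVAL)

A single dropped packet only bumps the counter; it takes four misses in a row to page anyone.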


Even better than vacri's suggestion: set up load balancing so that you always have a server to fall back on if one goes down. Once you get the alert, you can fix the down server without the pressure of getting a down website back up.
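For a website that usually just means a reverse proxy with a spare upstream. A minimal sketch, assuming nginx sits in front of two droplets (the hostnames and port are hypothetical):

    upstream app {
        server droplet-a.example.com:8080;         # primary
        server droplet-b.example.com:8080 backup;  # only used if the primary is down
    }

    server {
        listen 80;
        location / {
            proxy_pass http://app;
        }
    }

With the backup directive, nginx only sends traffic to the second droplet when the first stops responding, so the site stays up while you investigate.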


I have production servers that have been up 3+ years, and others that died within 30 days. On AWS. So having a similar experience on DO wouldn't worry me unless it was every node.


That's a pretty serious accusation. Do you have any other evidence beyond "This one instance I had performed badly"?


The evidence supplied is support's response of "This is normal", not "one instance was bad".


I tried DO for a side project last year.

The Good:
- Easy setup via the web UI
- Instances provision fast
- They had a 96GB instance type for only $960/mo

The Bad:
- Support was lackluster, kind of like "stuff happens, so your servers may or may not keep working."
- DDoS mitigation was overly aggressive. A DDoS likely targeted at the prior owner of our IP caused DO to take the box offline for two days with no warning or even a note saying it had been done. It took ~30 hours for them to even figure out that they had done this.
- No SAN storage, so there is no practical way to store data in excess of what the largest instance type will hold. This limits the utility of the largest instance type.
- The killer: they had no capacity for new 96GB droplets, so if our box died we would have been unable to restore from the DO snapshot. This is what ultimately caused us to migrate off of DO (to AWS).

So I still run a couple of nonessential projects there (total spend < $20/mo), but I wouldn't trust my paycheck to services run at DO just yet.


My benchmarks (and real world usage with a database I rough up pretty badly) say you're wrong.

So do most blog posts reviewing DO out there as well. Give it another shot, or document exactly what you did and I'm sure someone will figure out the issue.


Thank you for telling me about my own personal experience which you know nothing about. I guess if it didn't happen to you personally then it just didn't happen, right?


LOL, I have no dog in the fight. I just think that if you're going to call something a "pump and dump scheme" that gave you "terrible IO performance" when practically nobody else has such issues, you might want to add a bit more information. It might help someone with a setup similar to yours figure out when DO is not the best way to go.


No one's saying your personal experience is wrong, just your extrapolation to "this is a pump & dump scheme"


That their support organization does not care about actually supporting the product is very telling to me. Especially when you have somebody willing to spend $160/mo or more and you basically tell them to 'go away'.


That's definitely a negative and I appreciate you sharing the anecdote, but it's still an awfully big leap from one bad support experience to criminal enterprise.


Why is your particular anecdote worth more than everyone else's?


It isn't, but I'm not trying to negate the experience of anybody else either. If you had good luck with them then great, that is your experience. But don't come and tell me that what happened to me is invalid or didn't happen. Sorry, I do not keep exhaustive benchmarks for every poor product that I decide not to use. I can tell you that it was taking MINUTES to open and read the contents of a file that took seconds on AWS and my local development machine.


Same here. Have been tinkering with moving from Linode to DO for a couple years but keep getting scared and staying put. Linode remains solid.


Out of curiosity, which plan were you using? The basic $5 VPS?


HA! I was using their $160/mo option with 16GB of RAM.



