Hacker News: airfreak's comments

Try the new quorum queues; they don't have those issues.


Yeah, that sounds about right. Of course, if you had 200 connections and 50 queues you'd more likely be seeing 100,000 msg/s. The number of connections and queues has a big effect on total throughput.


The old network partition problems people remember about RabbitMQ are solved by quorum queues.


Yes, but quorum queues don't have many of the features of classic mirrored queues.


I like the concept of error budgets. Start off by knowing what kind of quality and resiliency a system requires and design your test strategy around that. That means talking to the client about it.

I'm not going to invest a load of time in various types of automated test for an internal site with a form over a database that 2 users use for low priority work. The idea of 80%-100% code coverage for basic work like that seems like waste to me.

But for the critical path of the eCommerce shopping experience, I'm going to write all kinds of automated tests at multiple layers of the stack, right up to chaos/stress testing it, so that we know when Black Friday comes we can handle it.

I don't like dogma, and TDD seems too dogmatic for me. I am very pro testing, having worked as a QA, a developer, and an ops engineer. I want the freedom to exercise my own expert judgement. The problem with dogma is that it makes thinking take a back seat. Suddenly we have 80% code coverage enforced on a page that loads a grid from a table, going through a three-layered monstrosity of code.


> I like the concept of error budgets. Start off by knowing what kind of quality and resiliency a system requires and design your test strategy around that. Means talking to the client about that.

If you have a knowledgeable enough client, then great. But most people without an engineering background think that the correct number of bugs is 'zero'. It's really hard for them to get their head around something being, to some degree, buggy, and still being acceptable.


It all depends. For example with Apache Pulsar, tailing readers are served from an in-memory cache in the serving layer (the Pulsar brokers) and only catch-up readers end up having to be served from the storage layer (Apache BookKeeper). This is a little different from DistributedLog which always required going to BookKeeper for reads.

Apache BookKeeper can add additional latency for catch-up readers, on top of the extra hop, because the data of multiple topics is combined into each ledger. This means we lose some of the performance of sequential reads. BookKeeper mitigates this by writing to disk in batches and sorting each batch by topic, so messages of the same topic end up together, but it still involves more jumping around on disk.

Also, BookKeeper allows a nice separation of disk IO: the read and write paths are separate and can be served by different disks, so you can scale your reads and writes independently to a certain extent.
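To make the tailing vs catch-up distinction concrete, here's a toy Python model of the idea. This is not real Pulsar or BookKeeper code; the class, cache size, and topic names are all made up for illustration. Readers near the head of the log hit the broker's in-memory cache, while readers that have fallen behind must fetch from the storage layer.

```python
from collections import deque

class Broker:
    """Toy model of a broker with a bounded per-topic tail cache.

    Tailing readers (near the head) are served from memory; readers
    that have fallen behind miss the cache and go to the storage
    layer (standing in for BookKeeper). Illustration only.
    """

    CACHE_SIZE = 4  # keep only the most recent entries in memory

    def __init__(self, storage):
        self.storage = storage  # topic -> full log (list of messages)
        self.cache = {}         # topic -> deque of (offset, msg)

    def publish(self, topic, msg):
        log = self.storage.setdefault(topic, [])
        log.append(msg)
        tail = self.cache.setdefault(topic, deque(maxlen=self.CACHE_SIZE))
        tail.append((len(log) - 1, msg))

    def read(self, topic, offset):
        """Return (message, where_it_was_served_from)."""
        for off, msg in self.cache.get(topic, ()):
            if off == offset:
                return msg, "broker-cache"      # tailing reader
        return self.storage[topic][offset], "storage"  # catch-up reader

broker = Broker(storage={})
for i in range(10):
    broker.publish("orders", f"msg-{i}")

print(broker.read("orders", 9))  # recent offset -> served from cache
print(broker.read("orders", 0))  # old offset    -> served from storage
```

In the real system the "storage" step is a network hop to a BookKeeper ensemble, which is where the extra latency for catch-up reads comes from.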

For all those reasons, I would have loved to see Twitter look at Apache Pulsar and compare performance profiles with Apache Kafka.


Streamlio published their OpenMessaging benchmarks between Apache Kafka & Apache Pulsar here: https://streaml.io/pdf/Gigaom-Benchmarking-Streaming-Platfor...


I used a screen reader for a few years due to sight issues; these days I use a screen again with magnification. When you work without a screen you end up having to build up a mental model of the code, which you keep inside your head. When you navigate the code, you are doing it mentally, inside this cathedral you maintain in your mind.

So given that, the main challenge for me was code navigation. I used Visual Studio at the time, and it allowed me to jump to method definitions, call references, the start/end of a method, etc. So the worst thing was long methods, as 1) I had no efficient way of navigating them except to read each line, and 2) it was hard to keep track of everything the method did. Breaking up code into smaller pieces, with a good name for each method, sped up my understanding and navigation of the code a lot. It also simplified the mental model in my mind.
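A made-up sketch of the effect, in Python rather than the C# I'd have been working with then: once a long method is split into small, well-named helpers, "jump to definition" and "find references" carry most of the navigation work, and each piece fits in your head on its own.

```python
# Hypothetical example: a checkout routine split into named steps.
# Each helper is short enough to hold in memory, and its name tells
# you what it does without reading the body.

def process_order(order):
    validate(order)
    total = price(order)
    return receipt(order, total)

def validate(order):
    if not order.get("items"):
        raise ValueError("empty order")

def price(order):
    return sum(item["qty"] * item["unit_price"] for item in order["items"])

def receipt(order, total):
    return {"id": order["id"], "total": total}

order = {"id": 1, "items": [{"qty": 2, "unit_price": 5.0}]}
print(process_order(order))  # {'id': 1, 'total': 10.0}
```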


As a sighted person, I want to emphasize that these goals, short methods (max 40 LOC) and meaningful names for all code entities, are equally important to me. I want to spend as little time as possible reading code. I want to spend most of my time modifying and transforming it. I do so by building up a mental model which I can change freely before typing the changes in. Sometimes it helps to write the ideas down to see how they look, but everything you say applies equally well to me as a non-blind programmer.

Another important trait of easy-to-read code is the rule of "big picture first". I.e. if you have a `main()` method, you will find it first in my files, only then followed by the sub-methods it calls. The same holds for types: first comes the class, and then it is followed by any type a class attribute might require. So when opening a file you immediately get the big picture, and depending on the granularity you want for your mental model, you can continue reading by scrolling down. When the model is good enough, you can skip the rest of the file.

I find this pattern is the opposite of how many others structure their files. Some people list all the details and sub-methods first, and only further down the road tell me how those details are put together. This forces me to read a lot of code to build a bloated mental model which might not even be necessary for the task at hand.
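A minimal Python sketch of that ordering (contents made up). Python resolves names at call time, not definition time, so the entry point really can come first and the helpers can follow:

```python
# big_picture_first.py -- entry point first, helpers after, so a
# reader scrolling from the top sees WHAT the program does before
# HOW it does it.

def main():
    data = load()
    summary = summarize(data)
    report(summary)

def load():
    return [3, 1, 2]

def summarize(data):
    return {"count": len(data), "max": max(data)}

def report(summary):
    print(summary)

if __name__ == "__main__":
    main()
```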


I've just written a little parser in OCaml, and the file goes like this:

  Module declarations                         ( 3 lines)
  Parser type declaration                     ( 1 line )
  Implementation details you don't care about (47 lines)
  Actual grammar                              (20 lines)
In that order. In Haskell, I would probably have put the grammar at the top of the file, so you can get the big picture right away. But the language reads stuff from top to bottom, so I have to put the big picture at the bottom.

An alternative would be to separate the details from the grammar, but then I would expose those implementation details in an interface, while in fact the rest of the program is only interested in one function that parses everything.

Or, you could write from bottom to top. It's OCaml, so you know the big picture cannot be at the top. Now a case could be made for a language that reads toplevel statements from bottom to top…


To be fair, C is kind of like that, too. You could move the declarations of static functions to the top, but then you'd still have declarations that prevent you from getting to the real meat.


That and the poor/nonexistent syntax for macros are about the only two serious gripes I have with OCaml. The third was the missing multithreading, but IIUC this is now kind of workable with Lwt?


In my career I have found programmers generally fall into two camps:

- those that prefer high-level overviews first, then drill down to learn details later

- those that want to thoroughly understand each smaller building block first before dealing with bigger picture concerns

For example, I jump right into projects and skim the docs and start hacking stuff together without learning the nuts and bolts, whereas my cofounder likes to read the theory behind the library, then read the source code before even starting to write a single line that uses the library. Both approaches are valid, I've just found most people tend to strongly prefer one approach or the other, and it's helpful to identify what a person's preferred approach is when working with them.


> When you work without a screen you end up having to build up a mental model of the code, which you keep inside your head. When you navigate the code, you are doing it mentally, inside this cathedral you maintain in your mind.

Interesting! I am not visually impaired (not beyond what can be fixed with reading glasses), but I have always worked like that with code.


To get some perspective on the difficulty of building this model, imagine you can see fine, but your screen can only display one word (token) at a time. You can navigate with arrow keys and a list of keyboard short-cuts.

This restriction applies whether you are coding, browsing the web (say, Stack Overflow), or reading XML (shudder).

So a mental model is critical, because without maintaining context of where you are, you can get lost and spend too much time rediscovering your surroundings. The better your model, the faster you can navigate and the less effort is required in general.
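A toy Python sketch of that keyhole, purely for illustration (it is not a screen reader, just the restriction): you see exactly one token and must move a cursor to rediscover any context you lose.

```python
# Toy "keyhole" viewer: one whitespace-separated token visible at a
# time, cursor movement as the only navigation. Illustration only.

class Keyhole:
    def __init__(self, text):
        self.tokens = text.split()
        self.pos = 0

    def current(self):
        return self.tokens[self.pos]

    def next(self):
        self.pos = min(self.pos + 1, len(self.tokens) - 1)
        return self.current()

    def prev(self):
        self.pos = max(self.pos - 1, 0)
        return self.current()

view = Keyhole("def add(a, b): return a + b")
print(view.current())  # 'def'
print(view.next())     # 'add(a,' -- even the tokenization fights you
print(view.prev())     # back to 'def'
```

Even in this tiny model, you can feel how quickly you'd lean on a memorized map of the file rather than what's "on screen".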


I think you have nailed the reason why I work like that: I learned to program in the days of line editors. So it was quite like what you said, only not a word at a time but a line at a time, which is still far too little context, so you need something to offset that.


This might be a stupid question, but is there a kind of "modern" line editor, maybe with jump-to functionality, that would only show me one line of code at a time?

I wonder if programming like that would force an improvement in my code: kind of like programming through a keyhole, rather than the information overload of 20 documents open simultaneously.

Maybe an Emacs mode with Intellisense, that only showed a line at a time, with syntax highlighting?


Well, you could always set your window to be only one line high, that would have much the same effect.


Never used it standalone, but I'd think that's what "ed" is?


Thank you for posting, it's interesting.

I wish a good blind programmer would write a book. I really think there are great gains to be made in programming, when we find ways to optimize the way we build and maintain our mental models of an application. I suspect blind programmers have some good insights to share.


IMO this is good engineering practice in general. It’s also super helpful for sighted programmers.


I really learned to appreciate the scale and horror of the war after listening to Dan Carlin's 6 part podcast series on it. I highly recommend it though it isn't for the faint of heart. https://www.dancarlin.com/product/hardcore-history-50-bluepr...


This is a fantastic series, and it's worth noting that it is free to download. Buckle in for a wild 24 hour ride...

Ghosts of the Ostfront similarly cannot be recommended enough.


My wife had HSCT to treat MS a year and a half ago. So far some symptoms went away, others stayed, but most importantly she hasn't got worse. Fingers crossed it will stay that way.


That's great news. My bro had MS. It's much more serious than AS. As I am intimately familiar with MS by proxy, I'm curious: which aspects improved and which ones didn't?


This analysis is for people who want to understand the internals of Apache Pulsar, rather than a high-level overview of how to use the technology and how it compares to Apache Kafka.


This might just be the best, most well-balanced talk on how agile has gone wrong, and on ways to combat the decay.

I've seen a lot of great teams and poor teams operate. A common set of traits in great teams: technical excellence, freedom coupled with accountability, and giving a shit about the quality of their work.

Poor teams: lack of discipline, not caring and poor technical skills.

I've also seen great teams fall apart due to external influence. That influence was agile gone wrong, imposed from the top. Freedom: gone, but accountability remains. Technical practices valued by the team: deprioritized by the process. Giving a shit: demotivation, followed by quitting or being fired.

