I think it's being downvoted because it's a little too simplistic to be meaningful. Everything we do, literally everything, we do because it makes us feel good, and therefore everything is a transaction. We provide food and shelter for ourselves because, well, we enjoy those things. We help friends and family in need because we derive pleasure from helping others and seeing our loved ones flourish.
If we all agree on that basic universal law of human behavior, we can talk about the higher-level question of the role of an employer in society beyond the simple transaction of money for labor.
We can talk about philosophy all we wish. But the reality is that I can’t take philosophy and exchange it for goods and services. Whatever you think the role of an employee should be, we have to accept reality for what it is and take responsibility accordingly.
My wife and kids had no desire to talk philosophically about the role of the employee when I got laid off a decade ago.
They were assured when I told them that we had three months’ worth of savings in the bank, that my resume was updated, and that I had a strong network and could get a job quickly; I had an offer from another company in less than a week. While it would admittedly take longer now, I’ve also reduced my expenses.
You’re 100% correct, but it’s easy to dismiss because it punctures the individual’s perception of how much their company values them. The reality is that most of us aren’t special; few are special and irreplaceable.
Have you known of any company of any size to go out of business because one person left?
I assure you that if any of us got hit by a bus tomorrow, the company would send our next of kin flowers along with “thoughts and prayers” and have an open req for our position before our body got cold.
Three months later, you would only see our name brought up in the occasional “git blame”.
By your definition, the only way someone "cares" about another person is if they are prepared to suffer any and every possible loss to protect said person from harm?
You mean like laying people off so the company doesn’t go out of business?
Whether the company “cares” about me is irrelevant. I need for them to put money in my account at the agreed upon intervals. I have family that “cares” about me.
Thought experiment: if the company you worked for, the one you thought “cared about you”, told you they couldn’t pay you, would that care be reciprocated? Would you work for free? Take a 30% pay cut?
You state (or at least imply) that the fact that a company is not willing to go out of business to avoid firing someone is proof that they don't care about that person. All I can say is that you seem to have a reductionist definition of the word "care". I also wouldn't expect my friends to, say, sell their house to support me if I needed it.
I’m saying whether the company cares about me is irrelevant. “Care” is not the company’s place in my hierarchy of needs; that’s what my family is for. They exist solely to provide money to support my need for food and shelter.
My wife “cares” about me and I her. I know for a fact that she would make sacrifices for me even if it put her in a bad spot temporarily.
I’ll leave my job at the drop of a hat, and they will get rid of me as soon as they see my contribution to the bottom line isn’t beneficial to them. I would hope that the bar would be much higher for my marriage, i.e., “the person I want to care about me”.
On modern ThinkPads, less than 0.5% of the battery per hour is expected, so if you disable automatic suspend to disk (aka "suspend to both") to save a few TBW on your NVMe, expect to lose about 10% per day.
Personally, I like that Windows suspend to disk can be set up to only kick in once a specific power budget has been exhausted: if the laptop has been sleeping for 5 days while disconnected, with 50% of the battery gone, it's neat to suspend to disk so that a week later (or more) it still has enough power to resume work.
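On the Linux side, systemd's suspend-then-hibernate is the closest analogue I know of, though it's time-based rather than power-budget-based. A minimal sketch, assuming a systemd-based distro (the delay value is illustrative, not a recommendation):

    # /etc/systemd/sleep.conf
    [Sleep]
    AllowSuspendThenHibernate=yes
    # Stay suspended to RAM this long, then wake briefly and hibernate:
    HibernateDelaySec=48h

Newer systemd releases can also estimate battery drain and hibernate before the battery runs out, which gets closer to the Windows power-budget behavior.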
Yeah, I really can't fathom how Material Design won out. It's so... gimmicky. Like when you click a button and it radiates a ripple across the button surface.
Sounds like they've been bitten by IEEE 754 floating point problems. JS only supports encoding numbers that are representable in 64-bit ("double precision") IEEE 754. Most JSON standards make the same assumption and define JSON numbers to match. (There's no lack of an "encoding" standard there; it just inherits JS's, which is double-precision IEEE 754.) Some JSON implementations in some languages don't follow this particular bit of JSON standardization and instead try to output numbers outside the range representable by IEEE 754, but that's arguably much more an "implementation error" than an error in the standard.
This most commonly occurs when dealing with int64/"long" numbers towards the top or bottom of that range (a double has only 53 bits of mantissa, so it can't represent every 64-bit integer).
There is no JSON standard for numbers outside the range of double-precision IEEE 754 floating point other than "just stringify it", even now that JS has a BigInt type that supports a much larger range. But "just stringify it" mostly works well enough.
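You can see the cliff directly; a minimal C++ sketch (the same 53-bit mantissa limit applies to any IEEE 754 double, JS numbers included):

    #include <cstdint>
    #include <cstdio>

    int main() {
        // 2^53 is the last point where a double can represent every integer;
        // past it, doubles land only on every other integer (then every
        // fourth, and so on).
        int64_t big = (int64_t{1} << 53) + 1;  // 9007199254740993
        double d = static_cast<double>(big);   // rounds to 9007199254740992
        std::printf("%lld -> %.0f\n", static_cast<long long>(big), d);
    }

Round-tripping an int64 near either end of its range through a double silently snaps it to the nearest representable value, which is exactly the "mysterious truncation" people hit.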
The JSON "Number" standard is arbitrary precision decimal[1], though it does mention that implementations MAY limit the parsed value to fit within the allowable range of an IEEE 754 double-precision binary floating point value. JSON "Number"s can't encode all JS numbers, since they can't encode NANs and infinities.
The "dual" standard RFC 8259 [1] (both are normative standards under their respective bodies, ECMA and IETF) is also a useful comparison here. It's wording is a bit stronger than ECMA's, though not by much. ("Good interoperability" is its specific call out.)
It's also interesting that the proposed JSON5 (standalone) specification [2] doesn't seem to address it at all (but does add back in the other IEEE 754 numbers that ECMA 404 and RFC 8259 exclude from JSON: +/-Infinity and +/-NaN). It maintains that its numbers are "arbitrary precision" while also requiring those few IEEE 754 features, which may be even more confusing than either ECMA 404 or RFC 8259.
One example that's bitten me is that working with large integers is fraught with peril. If you can't be sure that your integer values can be exactly represented in an IEEE 754 double precision float and you might be exchanging data with a JavaScript implementation, mysterious truncations start to happen. If you've ever seen a JSON API and wondered why some integer values are encoded as strings rather than a native JSON number, that's why.
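For illustration, here's roughly what the producer side of that string-encoding workaround looks like (a sketch using the nlohmann/json C++ library; the field names and make_user helper are made up):

    #include <cstdint>
    #include <string>
    #include <nlohmann/json.hpp>

    // Hypothetical producer side: "id" may exceed 2^53, so emit it as a string.
    nlohmann::json make_user(int64_t id) {
        nlohmann::json j;
        j["name"] = "example";
        j["id"]   = std::to_string(id);  // a JS consumer parses this with BigInt()
        return j;
    }

    int main() {
        // prints {"id":"9223372036854775807","name":"example"}
        std::puts(make_user(INT64_MAX).dump().c_str());
    }

The consumer parses the string back into a BigInt (or int64) on its side; the value never touches a double, so nothing gets rounded.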
"onMouseMove" is a user signal that you would want to be fast. also signals should be used for more than just user input interactions, like "download progress" etc..
Objective-C was doing sufficiently-fast UI updates for it to run well on iPhones 10 years ago, while relying on objc_msgsend, which is _much_ slower than a virtual function call or even a Qt signal.
You wouldn't want to use ObjC's message sending OR Qt's signalling mechanism in a tight inner loop – hell, you probably don't want to deal with the indirection of the vtable incurred by a virtual function in a tight inner loop. But all of these are more than fast enough for interactive UI work.
> objc_msgsend, which is _much_ slower than a virtual function call or even a Qt signal.
objc_msgsend is slower than a virtual function, but like 1.1x-2x, not 10x. (In rare cases it can even be faster due to it being a little more predictor-friendly.)
There's a chart of various timings here [1] and objc_msgSend is actually pretty efficient (it's received a lot of optimization over the years for obvious reasons).
A cached IMP is faster than a C++ virtual method call (both of which are slower than a non-virtual method call, inline code, or a C function call of course).
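If you want a feel for these numbers on your own machine, here's a rough C++ sketch of a direct-vs-virtual micro-benchmark (caveats apply: a serious measurement would hide the dynamic type behind a compilation boundary to defeat devirtualization, and results vary by CPU and compiler):

    #include <chrono>
    #include <cstdio>

    struct Base {
        virtual int f(int x) { return x + 1; }
        virtual ~Base() = default;
    };
    struct Derived : Base {
        int f(int x) override { return x + 2; }
    };

    static int direct(int x) { return x + 2; }

    int main() {
        Derived d;
        Base *b = &d;                  // calls below dispatch through the vtable
        constexpr int N = 100'000'000;
        volatile int sink = 0;         // keeps the loops from being optimized away

        auto t0 = std::chrono::steady_clock::now();
        for (int i = 0; i < N; ++i) sink = direct(sink);
        auto t1 = std::chrono::steady_clock::now();
        for (int i = 0; i < N; ++i) sink = b->f(sink);
        auto t2 = std::chrono::steady_clock::now();

        auto per_call = [&](auto a, auto z) {
            return std::chrono::duration<double, std::nano>(z - a).count() / N;
        };
        std::printf("direct:  %.2f ns/call\n", per_call(t0, t1));
        std::printf("virtual: %.2f ns/call\n", per_call(t1, t2));
    }

Expect both numbers to be in the low single-digit nanoseconds on anything modern, which is the point: the dispatch mechanism is rarely where interactive UI time goes.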
I like to bang the drum that, as a programmer, you need a better grasp of the sheer number of orders of magnitude you're spanning than the average programmer has. We so often deal in "10x slower" and "100x slower" that we can forget it just doesn't matter if we're only doing it a mere 60 times a second. 10x slower on a process that takes 100 ms is a problem; 10x slower on a process that takes 10 ns requires a lot of looping to become a human-scale problem. There are workloads that can attain that level of looping, certainly, but not everything can.
A good programmer ought to have read that sentence and instinctively observed that between 100ms and 10ns is a full seven orders of magnitude. For two numbers that at the human level may seem not terribly far away from "zero", there's a lot of space between them.
onMouseMove is normally delivered at the video frame rate, 60 fps. The OP's benchmark shows it can deliver around 60M signals per second, so it uses about 1/1000000 of the CPU time. Seems tolerable.
Even if it was delivered at the polling rate, that should never be higher than 1 kHz (otherwise you deserve whatever performance issues you get). A virtual function call is 15 ns conservatively, so say a signal is 150 ns; 1000 of those per second is <150 µs of wasted time, well below observable overhead in any human-centric application.
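In code form, for anyone who wants to check the arithmetic (the per-call costs are the assumptions above, not measurements):

    #include <cstdio>

    int main() {
        // Assumptions taken from the comment above, not measurements:
        constexpr double signal_ns = 150.0;   // ~10x a 15 ns virtual call
        constexpr double events_hz = 1000.0;  // polling-rate upper bound
        // 150 ns * 1000 events = 150 us of overhead per second
        std::printf("wasted: %.0f us per second\n",
                    signal_ns * events_hz / 1000.0);
    }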
As for download progress etc., I don't think I have ever had to worry about the speed of a function call, as long as I was letting the event loop take care of it.
The slowest number mentioned in the post--"32,562,683 signals [per second] with sender"--works out to about 31 nanoseconds. That's around half a dozen orders of magnitude less than an amount of latency that would be noticeable to a human.
Even something 10 times slower than a virtual call is still ridiculously fast on any general-purpose CPU made in this century. We often forget how insanely fast they are (and at the same time, I have no idea how we manage to add so much bloat to certain stacks that they make the processor struggle).
Like, I’m fairly sure you could have decent latency if your callback function for onMouseMove made a local network call to another process.
Also, how fast do you think download progress should update? Animating it to the next keyframe every second is much more than enough.
In Qt, those sorts of input interactions are mostly handled through virtual function calls, not signals. You're basically referring to QWidget::mouseMoveEvent. https://doc.qt.io/qt-6/qwidget.html#mouseMoveEvent
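For reference, that override looks like this in Qt 6 (a minimal sketch; the Canvas class is made up):

    #include <QWidget>
    #include <QMouseEvent>
    #include <QPointF>

    // Mouse moves arrive as a virtual function call on the widget,
    // not as a signal/slot emission. (By default Qt only delivers moves
    // while a button is held; call setMouseTracking(true) to get them all.)
    class Canvas : public QWidget {
    protected:
        void mouseMoveEvent(QMouseEvent *event) override {
            lastPos = event->position();  // cursor position in widget coords (Qt 6)
            update();                     // schedule a repaint via the event loop
        }
    private:
        QPointF lastPos;
    };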
Drawing only happens (at most) once every 4ms. I’m not aware of any modern display technology that allows you to manipulate a frame buffer during the display interval (unlike CRTs which could be manipulated during a scan).
The retro gaming community is obsessive about input and display latency and even there anywhere between 5-16ms (16ms being one frame of 240p content) is considered acceptable for even the most hardcore twitch response games.
That’s not saying that other processes aren’t happening faster than that, just that human input and subsequent visual feedback maxes out somewhere between 200-300 times per second and for the vast majority of humans, it is far, far lower.
If you measure the response of individual photoreceptors, it takes 25-50 ms to peak after a flash of appropriately-colored light; the precise number depends on the color and intensity of the light. After that, the signal still needs to propagate through a bunch of visual brain areas, and then even more needs to happen to somehow influence behavior. With everything tuned just so, you can complete that whole process in 100 ms or so, but the conditions have to be perfect; otherwise, 300+ ms between (simple) stimulus and (simple) response is more typical.
Obviously, a lot of this is happening asynchronously, and high refresh rates can help in other ways (e.g., by smoothing out movement), but it's astonishing how laggy our visual system is.
I wrote an interactive QtPy program: it receives video frames over the net and allows the user to interact (steering a microscope) in real time. I use millisecond timers (which generate signals delivered to slots) all the time.
After doing a bit of tuning I was able to steer the microscope with no visible latency, which means I'm handling user events at ~25 FPS or higher without any high variance. The only problems I have are when the handler that receives a signal takes longer than I have budgeted (i.e., more than 1000/25 = 40 ms).
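The same millisecond-timer pattern as a minimal C++ sketch (the original program is Python/QtPy, but the signal/slot mechanics are identical):

    #include <QCoreApplication>
    #include <QObject>
    #include <QTimer>

    int main(int argc, char **argv) {
        QCoreApplication app(argc, argv);

        QTimer timer;
        QObject::connect(&timer, &QTimer::timeout, [] {
            // Grab the latest frame / apply steering input here. The handler
            // must finish well within the 40 ms budget (1000 / 25), or the
            // event loop falls behind and the latency becomes visible.
        });
        timer.start(40);  // emits timeout() ~25 times per second

        return app.exec();
    }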
Why do you keep posting this over and over in this thread? The department applied this rule to try to reduce turnover, not because they literally want dumb cops.
I never said they want dumb cops, just that the person posting is not who they want because they're too smart. The way they limit turnover is by selecting for normal IQs so they don't get bored and leave.
Seems like the smart thing to do would be to give the "too smart" people training to tackle more interesting and important problems - like white collar crime, tax loopholes, internal corruption, etc.
I'll give you three guesses why that's never even floated as an option...
To be fair, white collar crime is mostly tackled by other organizations that have investigators but aren't typical police, like the FBI (accountants make up a large percentage of agents), the SEC, the IRS, etc.