Meaning of Impossible 03 Feb, 2019

After two years of working together, I finally knew for sure what I had long suspected: “Stupid” was just an expression Feynman applied to everyone, including himself, as a way to focus attention on an error so it was never made again.

I also learned that “impossible,” when used by Feynman, did not necessarily mean “unachievable” or “ridiculous.” Sometimes it meant, “Wow! Here is something amazing that contradicts what we would normally expect to be true. This is worth understanding!”

Prof. Paul J. Steinhardt on What Impossible Meant to Feynman.


List Comprehensions in C++ 20 Aug, 2018

From Bartosz Milewski's What Does Haskell Have to Do with C++?: Haskell-style list comprehensions of the form

one x = 1
count lst = sum [one x | x <- lst]

can be written in modern C++ like so:

template<class T> struct
one {
    static const int value = 1;
};

template<class... lst> struct
count {
    static const int value = sum<one<lst>::value...>::value;
};

The author calls out the comparison between one<lst>::value... and [one x | x <- lst]. I've used packs like this before, but looking at this juxtaposition was a revelation.
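For the excerpt to actually compile, a sum metafunction over a pack of ints is needed; Milewski's article defines one, though it isn't quoted above. A minimal sketch in the same style:

template<int...> struct sum;

template<> struct
sum<> {
    static const int value = 0;                      // empty pack sums to 0
};

template<int i, int... is> struct
sum<i, is...> {
    static const int value = i + sum<is...>::value;  // head plus sum of tail
};

With that in place, count<int, char, double>::value evaluates to 3 at compile time.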


Parallels Between Physics and Distributed Systems Concepts 20 Aug, 2018

The ACM Queue article titled Standing on Distributed Shoulders of Giants was a fascinating read. Some excerpts:

"Two-phase commit is the anti-availability protocol."

"Computing is like Hubble's universe...Everything is getting farther away from everything else."

"Shared memory works great... as long as you don't share memory."

"In a distributed system, you can know where the work is done or you can know when the work is done but you can't know both."

This one took a bit of squinting to make sense of. Pat Helland is referring to the fact that requests get retried, and since we store copies for redundancy, we can only know for certain where a request was processed or that it was successfully completed. Once you know it completed, you can trace back where it was done, but only in hindsight.

This one's on eventual consistency:

"While not yet observed, a put does not really exist... it's likely to exist but you can't be sure. Only after it is seen by a get will the put really exist."

The references at the end of the article are also good refreshers on Distributed Systems/Computing.


Emulating PDP-11 Abstract Machine as the Root Cause of Spectre and Meltdown 20 Aug, 2018

The following excerpts from David Chisnall's ACM Queue article C Is Not a Low-level Language caught my attention.

"The root cause of the Spectre and Meltdown vulnerabilities was that processor architects were trying to build not just fast processors, but fast processors that expose the same abstract machine as a PDP-11. This is essential because it allows C programmers to continue in the belief that their language is close to the underlying hardware."

"The quest for high ILP was the direct cause of Spectre and Meltdown."

By the definition that a "low-level language" should be "close to the metal," C is no longer a low-level language: being close to the metal means the language's constructs and memory model map trivially onto the processor's features and instruction set, and C's no longer do. Had languages, compilers, and processors offered first-class support for parallel constructs, we might never have run into Spectre/Meltdown.

The author makes the case for modern languages, compilers, and processors to move away from the simplistic flat-memory, sequential-execution legacy we inherited from the PDP-11. The paradigm shift from CPU to GPU is a case in point -- rather than resorting to speculative execution (ILP), the GPU supports parallelism in hardware and the language exposes the necessary vectorization and parallelization primitives, so programmers actively think about and program for parallelism/concurrency.
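As one illustration of such first-class parallel constructs (my example, not the article's), C++17's parallel algorithms let the programmer declare iterations independent up front, rather than leaving the hardware to rediscover that independence through speculation:

#include <algorithm>
#include <execution>
#include <iostream>
#include <numeric>
#include <vector>

int main() {
    std::vector<double> xs(1'000'000, 1.5);
    std::vector<double> ys(xs.size());

    // par_unseq states that iterations are independent, so the
    // implementation may vectorize and parallelize them outright --
    // no speculative ILP required to find the parallelism.
    std::transform(std::execution::par_unseq, xs.begin(), xs.end(),
                   ys.begin(), [](double x) { return x * x; });

    double total = std::reduce(std::execution::par_unseq,
                               ys.begin(), ys.end(), 0.0);
    std::cout << total << "\n";
}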

On a related note, Chisnall concludes:

"There is a common myth in software development that parallel programming is hard. This would come as a surprise to Alan Kay, who was able to teach an actor-model language to young children, with which they wrote working programs with more than 200 threads. It comes as a surprise to Erlang programmers, who commonly write programs with thousands of parallel components. It's more accurate to say that parallel programming in a language with a C-like abstract machine is difficult, and given the prevalence of parallel hardware, from multicore CPUs to many-core GPUs, that's just another way of saying that C doesn't map to modern hardware very well."


quote: We are systematically creating races out of things that ought to be a journey 02 May, 2013

We are systematically creating races out of things that ought to be a journey.



