User Profile

Jacob T.

jacob@knowledgehub.social

Joined 7 months, 3 weeks ago


Sparks (2023, Oxford University Press, Incorporated) 5 stars

Powerful

5 stars

A frank look into the long-term and ongoing rewriting of history by the CCP, as well as the brave few who continue to document and catalog the past as a time capsule for future generations.

Amazing how one of the largest man-made mass-death events is little known and almost never studied in Western history. This book opens more questions than it answers, but helpfully comes with a guide for where to go next to learn more.

Not particularly cheery, but there is a glimmer of hope that shines throughout.

HECO: Fully Homomorphic Encryption Compiler (2023, arXiv) 4 stars

In recent years, Fully Homomorphic Encryption (FHE) has undergone several breakthroughs and advancements, leading to …

An improvement in usability

4 stars

This paper covers a compiler that converts traditional imperative code into optimized (and batched) FHE operations via the SEAL library. The frontend is Python, which is lowered through multiple simplification and optimization passes built on the C++ MLIR framework.

Both synthetic/toy examples and more real-world applications are written as pure imperative implementations (which require non-performant emulation steps), compiled with HECO, and also implemented by hand with manual FHE optimizations. HECO's performance is close to the hand-optimized versions in most scenarios, even edging them out in a few.
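The batching transformation HECO automates can be illustrated without any FHE library: in SEAL's BFV/CKKS-style schemes, one ciphertext packs a vector of slots, so a loop of per-element operations can be rewritten as one SIMD operation plus a rotate-and-add reduction. A toy Python sketch (plain lists stand in for ciphertexts; no actual encryption, and not HECO's real output):

```python
# Toy illustration of the batching pattern an FHE compiler targets.
# Plain Python lists stand in for packed ciphertexts.

def naive_sum_of_squares(xs):
    # "Imperative" style: one multiply per element, which in FHE would
    # mean n separate ciphertext operations.
    total = 0
    for x in xs:
        total += x * x
    return total

def batched_sum_of_squares(xs):
    # "Batched" style: one slot-wise multiply over the packed vector,
    # then a rotate-and-add reduction in log2(n) steps -- the pattern
    # schemes like BFV/CKKS support natively on packed ciphertexts.
    # (Assumes len(xs) is a power of two.)
    squares = [x * x for x in xs]          # one SIMD multiply
    n = len(squares)
    step = 1
    while step < n:
        # rotate the vector by `step` slots and add it to itself
        rotated = squares[step:] + squares[:step]
        squares = [a + b for a, b in zip(squares, rotated)]
        step *= 2
    return squares[0]

data = [1, 2, 3, 4]
assert naive_sum_of_squares(data) == batched_sum_of_squares(data) == 30
```

The point of the batched form is that every list comprehension above corresponds to a single homomorphic operation, instead of one per element.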

Hopefully the spread of this tool will help FHE reach the masses.

Machine learning has progressed significantly in various applications ranging from face recognition to text generation. …

Less plausible than Adversarial Reprogramming

3 stars

This paper covers a highly effective (85%+) hijacking attack in which an adversary taints the training data so that the model can later be cajoled into performing other types of tasks. While this work is a step closer to a more general type of attack, the threat model is less plausible than the inference-time attacks popularized in the Adversarial Reprogramming literature.
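The poisoning idea can be sketched in a few lines (a hypothetical toy setup, not the paper's actual method): the attacker injects training samples that carry a "trigger" feature with attacker-chosen labels, so the trained model performs the attacker's task whenever the trigger is present, while behaving normally otherwise.

```python
# Toy data-poisoning sketch with a nearest-centroid "model".
# Hypothetical features and labels, purely for illustration.

def nearest_centroid_train(samples):
    # samples: list of (feature_vector, label); returns per-label centroids.
    sums, counts = {}, {}
    for x, y in samples:
        s = sums.setdefault(y, [0.0] * len(x))
        for i, v in enumerate(x):
            s[i] += v
        counts[y] = counts.get(y, 0) + 1
    return {y: [v / counts[y] for v in s] for y, s in sums.items()}

def predict(centroids, x):
    # Classify by squared distance to the nearest centroid.
    def dist2(c):
        return sum((a - b) ** 2 for a, b in zip(x, c))
    return min(centroids, key=lambda y: dist2(centroids[y]))

# Clean task uses the first two features; the third is the trigger (0 = absent).
clean = [([0.0, 0.0, 0.0], "cat"), ([1.0, 1.0, 0.0], "dog")]
# Poisoned samples: trigger set high, label chosen by the attacker.
poison = [([0.0, 0.0, 5.0], "attacker_class"), ([1.0, 1.0, 5.0], "attacker_class")]

model = nearest_centroid_train(clean + poison)
assert predict(model, [0.1, 0.1, 0.0]) == "cat"             # clean input
assert predict(model, [0.1, 0.1, 5.0]) == "attacker_class"  # triggered input
```

The clean inputs still classify correctly, which is what makes this class of attack hard to notice from ordinary evaluation.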

Deep neural networks are susceptible to adversarial attacks. In computer vision, well-crafted perturbations to images …

The first "RCE" against ML that I came across

5 stars

I have sent this paper to a number of people over the years since it first came out, and I am surprised this type of attack has received so little attention, even granting that it requires a white-box model. This is the first class of attack that lets the attacker reprogram an image classification model to perform an attacker-determined task (e.g., turning an image classifier into a counter).

Reviewing this paper 5 years after its release, it still holds up, and I see there is a small body of work in this lineage, including similar attacks against NLP classifiers. I would count this paper as the starting point for this class of attack, which has grown into an impressive and high-impact line of research.

Short Message Service (SMS) remains one of the most popular communication channels since its introduction …

An improvement over the state-of-the-art with real-world consequences

3 stars

While silent SMSes have long been used by authorities to geolocate cell phones, this work puts a less powerful version of that capability into anyone's hands. By training an ML model on the RTTs of silent SMSes sent to phones in different [known] locations, a temporal map of the GSM network can be built and later used to classify the RTTs measured against a victim's phone, approximating its location to the country/region level.

Without the cooperation of the cell infrastructure it's pretty coarse-grained, but it's still a scary way to figure out where a target of interest is without alerting them.
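The core idea can be sketched with synthetic numbers (illustrative only, not the paper's data or model): RTTs of silent-SMS deliveries to phones at known locations form timing fingerprints, and a new measurement is matched to the closest known region.

```python
import statistics

# Hypothetical mean RTTs (seconds) from repeated silent-SMS probes to
# phones in known regions; purely illustrative values.
fingerprints = {
    "region_A": [[2.1, 2.0, 2.2], [2.0, 2.1, 2.1]],
    "region_B": [[3.4, 3.5, 3.3], [3.5, 3.6, 3.4]],
}

def classify(rtts):
    # 1-nearest-neighbour on mean absolute RTT difference, standing in
    # for the trained ML classifier described in the paper.
    best, best_d = None, float("inf")
    for region, samples in fingerprints.items():
        for s in samples:
            d = statistics.mean(abs(a - b) for a, b in zip(rtts, s))
            if d < best_d:
                best, best_d = region, d
    return best

# New measurements against a victim phone map to the closest fingerprint.
assert classify([2.05, 2.10, 2.15]) == "region_A"
assert classify([3.45, 3.50, 3.35]) == "region_B"
```

The real attack needs many probes per location to average out network jitter, which is why it only resolves to a coarse region rather than a precise position.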

Machine learning has progressed significantly in various applications ranging from face recognition to text generation. …

Surprising that what is essentially an RCE against an ML model has gotten so little attention. Looks like a nice continuation of the image classification attacks.

The enshittification of the internet follows a predictable trajectory: first, platforms are good to their …

Very timely

5 stars

In this talk, @pluralistic@mamot.fr covers the basic premise of enshittification, how the internet giants have lobbied to change the rules that let them get big in order to stifle competition, and finally, what can be done about it.

I only recently became aware of the term enshittification, but I had watched the decay of online platforms hasten, be it Twitter, FB, Reddit, Google, etc. The term and its "playbook" were helpful for drawing connections between the behaviors of disparate sites.

I am slightly less hopeful than the author about reversing course on some of these, but I guess there's something to be said for me posting this review on a distributed, federated service that is not part of big tech.

Overall a great talk, wake up call, and pointer to some hopeful directions.