The reading of "Facebook is Shiri's Scissor" this morning made my day⁉ Thanks to "Handwaving Freakoutery" for posting it, and to fellow SubStack patron "e.pierce" for referring me to it.👍 Even considering the original date of its publishing, it truly remains a spot-on piece of very relevant satire🎯.
It is likely to be Genereative Adversarial Networks (GAN) and not Invertable Neural Networks (INN), since it operates on the domain of text and not image. GAN is simply a forger-inspector combo ("generators" and "discriminators"). The forger BS articles (with or without prior knowledge of divisive topics). The inspector have to dissern between real and BS articles, AND guess the divisiveness of said article. The errors and discovered patterns are pumped back into the forger for helping in creating more "realistic fakes" ("back propagation"). The two gets in an arms race until the two are evenly matched.
Anyone with some basic machine learning knowledge, and is not high out of their minds, would imeediately smell problems within the "Shiri's Scissors": (a) it is very difficult to be both realistic and divisive, people regulary read satire nowadays and cynicism is in the air (b) Most AI models are time-static, meaning that a model that worked on 2010s data cannot handle 2021 and onward accurately. The humanity landscape changed way too fast. (c) any "live" AI system can be easily thrown off course and is inherently weak, just as any high-frequency stock trader when a flash crash happens.
> humans serving as nodes in the ANN... They didn’t do this on purpose. It’s nobody’s fault... right down to the dopamine cocktail drip... Everybody needs to quit using these things.
Thanks for showing the technique for busting Roko's Basilisk and a myraid other "AI problems", very nice. The problem of "Layer 1" and "Layer 2" is related to the problems previously mentioned. De-stabalizing an AI is the same as trivializing the news. Numb the crowds wtih more "Sam Hyde can't keep get away with it" and other ambiguous memes ("Evasion" or "Adversarial Examples") to poison "Layer 2", and make "X eats their own" happen as the new goal to poison "Layer 1" feedback ("Adversarial Reprogramming"). But how much resources does it take for cynicism to numb the crap through the whole ecosystem?
The reading of "Facebook is Shiri's Scissor" this morning made my day⁉ Thanks to "Handwaving Freakoutery" for posting it, and to fellow SubStack patron "e.pierce" for referring me to it.👍 Even considering the original date of its publishing, it truly remains a spot-on piece of very relevant satire🎯.
As Usual,
EA☠
> I’m not sure the ‘backwards’ thing really works
It is likely to be Generative Adversarial Networks (GAN) rather than Invertible Neural Networks (INN), since it operates on the domain of text and not images. A GAN is simply a forger-inspector combo ("generator" and "discriminator"). The forger churns out BS articles (with or without prior knowledge of divisive topics). The inspector has to discern real articles from BS ones, AND guess the divisiveness of each article. The errors and discovered patterns are pumped back into the forger to help it create more "realistic fakes" ("backpropagation"). The two get into an arms race until they are evenly matched.
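To make the forger-inspector loop concrete, here is a tiny toy sketch of the arms race (my own example in PyTorch, on made-up feature vectors instead of real text; none of the names or numbers come from the article):

```python
# Toy forger-vs-inspector loop (GAN generator vs discriminator).
# Hypothetical example: "articles" are 16-dim random feature vectors, not real text.
import torch
import torch.nn as nn

FEAT, NOISE = 16, 8

forger = nn.Sequential(nn.Linear(NOISE, 32), nn.ReLU(), nn.Linear(32, FEAT))
inspector = nn.Sequential(nn.Linear(FEAT, 32), nn.ReLU(), nn.Linear(32, 1))  # real-vs-BS logit

opt_f = torch.optim.Adam(forger.parameters(), lr=1e-3)
opt_i = torch.optim.Adam(inspector.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def real_articles(n):
    # stand-in for a corpus of real, human-written articles
    return torch.randn(n, FEAT) + 2.0

for step in range(1000):
    real = real_articles(64)
    fake = forger(torch.randn(64, NOISE))

    # inspector: learn to tell real articles from forged ones
    loss_i = bce(inspector(real), torch.ones(64, 1)) + bce(inspector(fake.detach()), torch.zeros(64, 1))
    opt_i.zero_grad(); loss_i.backward(); opt_i.step()

    # forger: the inspector's error signal is pumped back ("backpropagation")
    # so the next batch of fakes is harder to catch
    loss_f = bce(inspector(fake), torch.ones(64, 1))
    opt_f.zero_grad(); loss_f.backward(); opt_f.step()
```

The real thing would swap the toy vectors for a text model and give the inspector a second output that scores divisiveness, but the shape of the loop is the same.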
Anyone with some basic machine learning knowledge, and who is not high out of their mind, would immediately smell problems with "Shiri's Scissor": (a) it is very difficult to be both realistic and divisive; people regularly read satire nowadays and cynicism is in the air. (b) Most AI models are time-static, meaning a model that worked on 2010s data cannot handle 2021 onward accurately; the human landscape changes way too fast. (c) Any "live" AI system can easily be thrown off course and is inherently fragile, just like a high-frequency stock trader when a flash crash happens.
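To put (b) in concrete terms, here is a toy sketch (entirely synthetic numbers, nothing from the article) of a classifier frozen on one era's data and then scored after the landscape drifts:

```python
# Toy illustration of (b): a model frozen on one era degrades once the data drifts.
# All numbers are made up; nothing here comes from the article.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_era(n, shift):
    # "divisive" articles cluster at +1+shift, "benign" ones at -1; shift = cultural drift
    X = np.concatenate([rng.normal(1 + shift, 1, (n, 2)), rng.normal(-1, 1, (n, 2))])
    y = np.concatenate([np.ones(n), np.zeros(n)])
    return X, y

X_2015, y_2015 = make_era(500, shift=0.0)    # the era the model was trained on
X_2021, y_2021 = make_era(500, shift=-2.5)   # the landscape has moved under it

clf = LogisticRegression().fit(X_2015, y_2015)
print("accuracy on its own era:", clf.score(X_2015, y_2015))
print("accuracy after drift:   ", clf.score(X_2021, y_2021))
```

The frozen model keeps applying its 2015-shaped boundary to 2021-shaped data, and its accuracy slides toward a coin flip.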
> humans serving as nodes in the ANN... They didn’t do this on purpose. It’s nobody’s fault... right down to the dopamine cocktail drip... Everybody needs to quit using these things.
Thanks for showing the technique for busting Roko's Basilisk and a myriad of other "AI problems", very nice. The problem of "Layer 1" and "Layer 2" is related to the problems mentioned above. Destabilizing an AI is the same as trivializing the news. Numb the crowds with more "Sam Hyde can't keep getting away with it" and other ambiguous memes ("Evasion" or "Adversarial Examples") to poison "Layer 2", and make "X eats their own" the new goal to poison the "Layer 1" feedback ("Adversarial Reprogramming"). But how many resources does it take for cynicism to numb the crap out of the whole ecosystem?
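For the curious, here is roughly what "evasion / adversarial examples" looks like at the toy level (an FGSM-style sketch of my own, on a made-up classifier rather than anything from the article): nudge an input along the gradient that raises the classifier's loss until its verdict flips.

```python
# Toy "evasion" (FGSM-style adversarial example): a small, targeted nudge
# flips a trained classifier's verdict. Everything here is synthetic/hypothetical.
import torch
import torch.nn as nn

torch.manual_seed(0)
clf = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 2))  # pretend: benign (0) vs divisive (1)
opt = torch.optim.Adam(clf.parameters(), lr=1e-2)

# train the toy classifier on two well-separated synthetic clusters
X = torch.cat([torch.randn(200, 2) - 2, torch.randn(200, 2) + 2])
y = torch.cat([torch.zeros(200, dtype=torch.long), torch.ones(200, dtype=torch.long)])
for _ in range(300):
    opt.zero_grad()
    nn.functional.cross_entropy(clf(X), y).backward()
    opt.step()

# evasion step: push a "divisive" input in the direction that raises its own loss
x = torch.tensor([[2.0, 2.0]], requires_grad=True)
nn.functional.cross_entropy(clf(x), torch.tensor([1])).backward()
x_adv = x + 2.5 * x.grad.sign()  # a big step, since this is only a 2-D toy

print("verdict before:", clf(x).argmax().item())      # 1 = divisive
print("verdict after: ", clf(x_adv).argmax().item())   # usually flips to 0 = benign
```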
The only thing that is truly outrageous is JEM.
Yes, I built my time machine for this.