#daytradechatter
Tue Feb 18 22:11:25 2020
<21c2e595> u use the liquidity to scale out a bit? <@U03694HFD>
Tue Feb 18 22:01:16 2020
<0e4e4324> <https://www.pbs.org/wgbh/frontline/film/amazon-empire> Airs tonight
— Amazon Empire: The Rise and Reign of Jeff Bezos
— FRONTLINE examines Amazon CEO Jeff Bezos’ ascent to power, and the glob…
Tue Feb 18 20:47:08 2020
transfix: cool =)
Tue Feb 18 20:33:28 2020
<9e126bf3> $ETHE up 21% today lmao
Tue Feb 18 20:32:37 2020
<773ab1f1> welp I got in touch with the hiring manager on this gig above with a reference from the team lead
<773ab1f1> The game is mine to lose.
Tue Feb 18 20:16:06 2020
<8f79fcda> Damn
Tue Feb 18 19:47:27 2020
<d666283b> <@U4FQ46RGU> you should check out How to Create a Mind by Ray Kurzweil
<d666283b> I know a guy who’s built a model to detect copyrighted porn videos
Tue Feb 18 19:33:53 2020
<75f07d61> What’s the purpose of this channel?
<9e126bf3> It’s a bridge to a dc++ hub
Tue Feb 18 19:08:19 2020
resurrection eh
Tue Feb 18 18:55:06 2020
<8f79fcda> in the context of economics… this is kinda powerful – saying that ML models and classical statistics diverge greatly depending on what you’re looking at
<8f79fcda> or “where”…
Tue Feb 18 18:42:01 2020
<773ab1f1> <https://machine-learning-and-security.github.io/papers/mlsec17_paper_51.pdf> <— backdoored neural net.
Tue Feb 18 18:39:10 2020
<8f79fcda> <@U03694HFD> this model has fit with the gartner hype cycle … rabbit holing commences
— The Map of Mathematics | Quanta Magazine
— Explore our surprisingly simple, absurdly ambitious and necessarily incomplete guide to the boundless mathematical universe.
Tue Feb 18 18:23:33 2020
<9e126bf3> <https://arxiv.org/pdf/1812.11118.pdf>
Tue Feb 18 18:22:52 2020
<9e126bf3> <https://openai.com/blog/deep-double-descent/>
— Deep Double Descent
— We show that the double descent phenomenon occurs in CNNs, ResNets, and transformers: performance first improves, then gets worse, and then improves again with increasing model size, data size, or training time. This effect is often avoided through careful regularization. While this behavior […]
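[Editor's note: the curve the OpenAI post and the Belkin et al. preprint (arXiv:1812.11118, linked above) describe can be reproduced in a few lines. Below is a minimal sketch, not from the chat, assuming random ReLU features fit by minimum-norm least squares via numpy's lstsq; the sine target, sample sizes, and feature counts are arbitrary illustrative choices. Test error typically peaks as the feature count approaches the number of training points and falls again past that interpolation threshold.]
```
import numpy as np

rng = np.random.default_rng(0)

def make_data(n, noise=0.1):
    # Noisy 1-D regression problem: y = sin(2*pi*x) + noise
    x = rng.uniform(-1.0, 1.0, size=(n, 1))
    y = np.sin(2.0 * np.pi * x[:, 0]) + noise * rng.standard_normal(n)
    return x, y

def relu_features(x, W, b):
    # Random ReLU feature map: phi(x) = max(0, x @ W + b)
    return np.maximum(0.0, x @ W + b)

n_train, n_test, n_trials = 40, 500, 20
x_tr, y_tr = make_data(n_train)
x_te, y_te = make_data(n_test)

for n_feat in (5, 10, 20, 40, 80, 160, 640):
    mses = []
    for _ in range(n_trials):
        W = rng.standard_normal((1, n_feat))
        b = rng.standard_normal(n_feat)
        Phi_tr = relu_features(x_tr, W, b)
        Phi_te = relu_features(x_te, W, b)
        # lstsq returns the minimum-norm solution once the system is
        # underdetermined (n_feat > n_train); that min-norm choice is
        # what drives the second descent past the interpolation threshold.
        coef, *_ = np.linalg.lstsq(Phi_tr, y_tr, rcond=None)
        mses.append(np.mean((Phi_te @ coef - y_te) ** 2))
    print(f"{n_feat:4d} features: mean test MSE = {np.mean(mses):.3f}")
```
[Averaging over several random feature draws smooths the curve; a single draw shows the same shape but noisily, with the peak near n_feat == n_train.]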