sno bunny

Author: Anon Ymous

#daytradechatter

Tue Feb 18 22:11:25 2020
<21c2e595> u use the liquidity to scale out a bit? <@U03694HFD>

#random

Tue Feb 18 22:01:16 2020
<0e4e4324> <https://www.pbs.org/wgbh/frontline/film/amazon-empire> Airs tonight
— Amazon Empire: The Rise and Reign of Jeff Bezos
— FRONTLINE examines Amazon CEO Jeff Bezos’ ascent to power, and the glob…

#sublevels

Tue Feb 18 20:47:08 2020
transfix: cool =)

#jobs

Tue Feb 18 20:32:37 2020
<773ab1f1> welp I got in touch with the hiring manager on this gig above with a reference from the team lead
<773ab1f1> The game is mine to lose.

#memeland

Tue Feb 18 20:13:51 2020
<8f79fcda>

#machinelearning

Tue Feb 18 19:47:27 2020
<d666283b> <@U4FQ46RGU> you should check out How to Create a Mind by Ray Kurzweil
<d666283b> I know a guy who’s built a model to detect copyrighted porn videos

#sublevels

Tue Feb 18 19:33:53 2020
<75f07d61> What’s the purpose of this channel?
<9e126bf3> It’s a bridge to a dc++ hub

#sublevels

Tue Feb 18 19:08:19 2020
resurrection eh

#good_content

Tue Feb 18 18:55:06 2020
<8f79fcda> in the context of economics… this is kinda powerful – saying that ML models and classical statistics diverge greatly depending on what you’re looking at
<8f79fcda> or “where”…

#machinelearning

Tue Feb 18 18:42:01 2020
<773ab1f1> <https://machine-learning-and-security.github.io/papers/mlsec17_paper_51.pdf> <— backdoored neural net.

#good_content

Tue Feb 18 18:39:10 2020
<8f79fcda> <@U03694HFD> this model has fit with the gartner hype cycle … rabbit holing commences
— The Map of Mathematics | Quanta Magazine
— Explore our surprisingly simple, absurdly ambitious and necessarily incomplete guide to the boundless mathematical universe.

#good_content

Tue Feb 18 18:23:33 2020
<9e126bf3> <https://arxiv.org/pdf/1812.11118.pdf>

#machinelearning

Tue Feb 18 18:22:52 2020
<9e126bf3> <https://openai.com/blog/deep-double-descent/>
— Deep Double Descent
— We show that the double descent phenomenon occurs in CNNs, ResNets, and transformers: performance first improves, then gets worse, and then improves again with increasing model size, data size, or training time. This effect is often avoided through careful regularization. While this behavior […]
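
Editor's note: the blurb above describes the double descent curve in words. Below is a minimal, self-contained sketch (my own illustration, not code from the OpenAI post or the Belkin et al. paper linked above) of the classic random-ReLU-features version: a fixed random feature map with a minimum-norm least-squares readout, swept past the interpolation threshold. The toy dataset, feature map, and width grid are arbitrary assumptions.

    # Illustrative sketch of double descent with random ReLU features.
    # Not from the linked post; dataset and widths are made-up assumptions.
    import numpy as np

    rng = np.random.default_rng(0)

    def make_data(n, noise=0.3):
        # noisy 1-D regression problem: y = sin(3x) + noise
        x = rng.uniform(-1.0, 1.0, size=(n, 1))
        y = np.sin(3.0 * x[:, 0]) + noise * rng.normal(size=n)
        return x, y

    def relu_features(x, w, b):
        # fixed random ReLU features; only the linear readout below is trained
        return np.maximum(x @ w + b, 0.0)

    x_train, y_train = make_data(40)
    x_test, y_test = make_data(1000)

    for width in (5, 10, 20, 40, 80, 160, 320, 640):
        w = rng.normal(size=(1, width))
        b = rng.normal(size=width)
        phi_tr = relu_features(x_train, w, b)
        phi_te = relu_features(x_test, w, b)
        # minimum-norm least-squares readout via the pseudoinverse (no regularization)
        beta = np.linalg.pinv(phi_tr) @ y_train
        train_mse = np.mean((phi_tr @ beta - y_train) ** 2)
        test_mse = np.mean((phi_te @ beta - y_test) ** 2)
        print(f"width={width:4d}  train MSE={train_mse:.3f}  test MSE={test_mse:.3f}")

With this setup the test error typically spikes near the interpolation threshold (width ≈ 40, the number of training points) and then falls again as width keeps growing, while train error goes to zero; the exact numbers depend on the noise level and seed, and the shape of the curve is the point.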
