AI: to learn
1. Variational Autoencoder (Part 1): So That's What It Is
https://spaces.ac.cn/archives/5253
2. Python
http://book.pythontips.com/en/latest/index.html
http://interpy.eastlakeside.com/
3. The Elements of Statistical Learning (English book)
1. https://esl.hohoweiya.xyz/01-Introduction/2016-07-26-Chapter-1-Introduction/index.html
2. http://www.loyhome.com/elements_of_statistical_learining_lecture_notes/
4. Intermediate Python
https://blog.csdn.net/qq_41853758/article/details/82853811
5. Berkeley slides: a smooth introduction to machine learning
https://mp.weixin.qq.com/s/o3Gkey3SnYBcZGdOWojDXQ
Slides:
https://csinva.github.io/pres/189/#/
CS 189/289 course notes:
https://people.eecs.berkeley.edu/~jrs/papers/machlearn.pdf
CS 189/289 course homepage:
https://people.eecs.berkeley.edu/~jrs/189/
GitHub:
https://github.com/csinva/csinva.github.io/blob/master/_notes/ref/ml_slides/slides.md
6. A survey of text classification methods
https://zhuanlan.zhihu.com/p/29201491
7. Deep learning for semantic similarity
Solving NLP problems with deep learning: computing semantic similarity
Note on the article's phrase "traditional text similarity such as BM25": BM25 and beam search are not the same thing (a minimal BM25 sketch follows the links below).
DSSM(Deep Structured Semantic Models)
https://www.cnblogs.com/qniguoym/p/7772561.html
Siamese LSTM implementation: https://blog.csdn.net/android_ruben/article/details/78427068
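To keep the two straight: BM25 is a classical bag-of-words ranking function computed from term frequencies and document lengths, while beam search is a decoding heuristic for sequence models; they are unrelated. Below is a minimal Okapi BM25 sketch for reference (my own illustration, not code from the linked articles; k1 and b are the usual free parameters at common default values):

    import math
    from collections import Counter

    def bm25_scores(query_terms, docs, k1=1.5, b=0.75):
        """Score each tokenized document in docs against query_terms."""
        N = len(docs)
        avgdl = sum(len(d) for d in docs) / N          # average document length
        # document frequency of each query term
        df = {t: sum(1 for d in docs if t in d) for t in query_terms}
        scores = []
        for doc in docs:
            tf = Counter(doc)
            score = 0.0
            for t in query_terms:
                if df[t] == 0:
                    continue
                idf = math.log(1 + (N - df[t] + 0.5) / (df[t] + 0.5))
                score += idf * tf[t] * (k1 + 1) / (
                    tf[t] + k1 * (1 - b + b * len(doc) / avgdl))
            scores.append(score)
        return scores

    docs = [["deep", "learning", "for", "nlp"],
            ["bm25", "ranks", "documents", "by", "term", "overlap"]]
    print(bm25_scores(["bm25", "nlp"], docs))

A real setup would add tokenization, stopword handling, and an inverted index; the rank_bm25 package wraps the same formula.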
8. Position embeddings
Two approaches known so far:
1. Learn the embedding from the data
https://medium.com/@_init_/how-self-attention-with-relative-position-representations-works-28173b8c245a
2. "Attention Is All You Need" computes the position embedding vectors directly from sinusoidal formulas; the paper reports that this works about as well as learned embeddings (see the sketch after these links).
https://datascience.stackexchange.com/questions/51065/what-is-positional-encoding-in-transformer-model (the 0 in pow(10000, 0) stands for the exponent 2i/d_model = 0/4, i.e. i = 0 with d_model = 4)
http://jalammar.github.io/illustrated-transformer/
https://www.comp.nus.edu.sg/~kanmy/courses/6101_1810/w8-transformer.pdf
http://nlp.seas.harvard.edu/2018/04/03/attention.html
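A minimal NumPy sketch of approach 2 (the function name is mine; d_model = 4 reproduces the 0/4 and 2/4 exponents discussed in the link above). Approach 1, by contrast, is just a trainable lookup table, e.g. torch.nn.Embedding(max_len, d_model):

    import numpy as np

    # "Attention Is All You Need", section 3.5:
    #   PE(pos, 2i)   = sin(pos / 10000^(2i / d_model))
    #   PE(pos, 2i+1) = cos(pos / 10000^(2i / d_model))
    def sinusoidal_position_embeddings(max_len, d_model):
        pos = np.arange(max_len)[:, None]        # (max_len, 1)
        i = np.arange(d_model // 2)[None, :]     # (1, d_model // 2)
        angles = pos / np.power(10000.0, 2 * i / d_model)
        pe = np.zeros((max_len, d_model))
        pe[:, 0::2] = np.sin(angles)             # even dimensions get sin
        pe[:, 1::2] = np.cos(angles)             # odd dimensions get cos
        return pe

    print(sinusoidal_position_embeddings(max_len=3, d_model=4))

The wavelengths form a geometric progression from 2*pi up to 10000*2*pi, so each position gets a unique sin/cos pattern, and for any fixed offset k, PE(pos+k) is a linear function of PE(pos), which is the paper's stated motivation for this choice.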