This is the first post of my first series, and hopefully the first of many.
In this series I will attempt to explain Lernd, my implementation of the ∂ILP framework by Evans & Grefenstette (2018) from DeepMind. If you are an AI researcher, you may have heard about the paper when it came out. I wouldn't call it the biggest or most important paper to come out of DeepMind, but it did catch my attention at the time. I was just beginning to think about going back to academia, and it seemed to me that the intersection of classic ML (probabilistic / NN-based) and symbolic AI might be where it's at.

While it's obviously a lengthy and quite technical paper, I found it very interesting. Moreover, I already knew about ILP from some course in university (cannot pinpoint which one exactly...). Still, the paper was complicated enough to confuse me, while seeming within reach. So I decided to go ahead and implement it! Now I'm pretty sure I understand it (and even have ideas for similar things), but to cement my understanding, I will try to explain everything in this series of posts. As they say, you only really know the things you can teach.
Of course, I have to assume some level of skill and understanding in my audience, so I'm writing for my younger self, before I learned about ILP. So basically, anyone even remotely in the field of CS & AI.
In the next post, I will probably try to explain the abstract and what ILP is.
- Evans, R., & Grefenstette, E. (2018). Learning explanatory rules from noisy data. Journal of Artificial Intelligence Research, 61, 1-64. https://doi.org/10.1613/jair.5714