Freshman Seminar Fall 17 LC05

Machine Learning

Recently I went to a Hack4Baruch event. They're a tech-focused club that has run a variety of events this semester, and the one I attended was a talk with Sean Reed. Mr. Reed was originally a physicist, but he's worn a great many hats, and the one he discussed here was his recent foray into machine learning. Before I get into the talk itself, I just want to say that the whole atmosphere of the event was really chill, which was nice, and there were just the right number of people there: the room didn't feel overcrowded, but it didn't feel too empty either.

As for the talk itself, I honestly can't recall how long it ran, which I guess is a testament to how natural it felt. Mr. Reed discussed his recent project, a neural network dedicated to language processing that he'd fed the entire text of Frankenstein. Before doing so, however, he took an impromptu show-of-hands poll of how much people already knew about neural networks, and spent a good while building the concept up from the bottom, starting with the most basic programming frameworks and libraries involved. I appreciated this greatly, because while I have cursory knowledge of programming and of AI, prior to this I was pretty much in the dark about the layer (or, as it turns out, layers) in between. After the explanation, he showed us a couple of interesting examples: his project's output as it developed over generations of training, and an image-recognition network along with the specifics of how it worked. There was also a site with a graphical simulation of a small network, showing how different algorithms and different numbers of active neurons and layers affected how the network trained and what it produced.

Overall, I was really happy with how the event went, and I signed up for their mailing list.
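
For anyone wondering what "feeding a book to a neural network" can actually look like in code, here is a tiny sketch I put together afterwards. To be clear, this is my own illustration, not Mr. Reed's actual project, and I don't know exactly which approach he used. A common way to do this kind of language processing is a character-level model that learns to predict each character from the ones before it, so that's what I'm sketching here, using PyTorch and a single stand-in sentence instead of the full text of Frankenstein.

# A minimal sketch (not Mr. Reed's code) of a character-level language model:
# the network reads raw text and learns to predict the next character.
import torch
import torch.nn as nn

text = "It was on a dreary night of November that I beheld my man completed."  # stand-in for the full novel
chars = sorted(set(text))
stoi = {c: i for i, c in enumerate(chars)}          # character -> index
data = torch.tensor([stoi[c] for c in text])        # encode the text as integers

class CharModel(nn.Module):
    def __init__(self, vocab, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(vocab, hidden)
        self.rnn = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab)
    def forward(self, x):
        h, _ = self.rnn(self.embed(x))
        return self.out(h)                          # logits for the next character at every position

model = CharModel(len(chars))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

# Train: each character is predicted from the characters before it.
x, y = data[:-1].unsqueeze(0), data[1:].unsqueeze(0)
for step in range(200):
    logits = model(x)
    loss = loss_fn(logits.reshape(-1, len(chars)), y.reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()
print("final training loss:", loss.item())

# Generate: start from one character and repeatedly sample the next one.
itos = {i: c for c, i in stoi.items()}
seq = [stoi["I"]]
with torch.no_grad():
    for _ in range(60):
        logits = model(torch.tensor(seq).unsqueeze(0))[0, -1]
        seq.append(torch.multinomial(torch.softmax(logits, dim=0), 1).item())
print("".join(itos[i] for i in seq))

With just one sentence the network basically memorizes its input, but the same loop run over an entire novel for many passes is roughly the kind of thing Mr. Reed was showing, with the generated text getting more coherent generation by generation.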
