# Research

# The Newer Ones

**September 20, 2019**

What do you think of my neural style transfer of my nephew's photo? I used the Mona Lisa as the style image.


**September 18, 2019**

If you are familiar with artificial neural networks, this should be interesting. Neural networks are trained by loss-minimization algorithms such as gradient descent and stochastic gradient descent (SGD); the one I've used most, and the one I think most people reach for by default, is the Adam optimizer. These algorithms all work by minimizing a loss function: the network compares its predicted output against the actual target and adjusts its weights to shrink the distance between the two, and that distance is the loss. For specific applications such as image-similarity detection, however, a function called "triplet loss" is often used instead.
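To make the idea concrete, here is a minimal NumPy sketch of triplet loss. The margin value and the squared-Euclidean distance are illustrative choices on my part, not details from the post: the loss pulls an anchor embedding toward a "positive" (same identity) and pushes it away from a "negative" (different identity) until the gap exceeds the margin.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """max(0, d(a, p) - d(a, n) + margin) with squared Euclidean distance.

    Zero loss once the negative is at least `margin` farther
    from the anchor than the positive is.
    """
    d_pos = np.sum((anchor - positive) ** 2)  # anchor-to-positive distance
    d_neg = np.sum((anchor - negative) ** 2)  # anchor-to-negative distance
    return max(0.0, d_pos - d_neg + margin)

a = np.array([0.0, 0.0])
p = np.array([0.0, 0.1])   # close to the anchor
n_far = np.array([1.0, 0.0])   # already far: loss is 0
n_near = np.array([0.1, 0.0])  # too close: loss is positive
```

With the far negative the hinge is inactive (`triplet_loss(a, p, n_far)` is `0.0`), while the nearby negative still incurs a penalty, which is what drives embeddings of different identities apart during training.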


# The Other Ones

**August 21, 2019**

A few months ago, I shifted my research focus toward unsupervised learning and parameterless clustering. As one might notice, even unsupervised learning algorithms require supervision over their learning process, since there are anywhere from a few to many parameters to tune. In older clustering algorithms such as K-Means, the number of clusters is always required as an input. Density-based methods such as DBSCAN do not rely on the number of clusters, but they do rely on two other parameters (a neighborhood radius and a minimum point count) that help them detect outliers and find arbitrarily shaped clusters. Therefore, I set out to find a solution while exploring Self-Organizing Maps (SOMs).
