# Research

# The Newer Ones

**October 13, 2019**

I recently submitted a paper named DISCERN with my research partner and truly one of the greatest professors at our institution. DISCERN is a partitional clustering algorithm that attempts to solve some common problems with the original K-Means and K-Means++. It is no secret that density-based and hierarchical clustering methods have their own specific uses. For instance, no other algorithm comes close to the results hierarchical clustering achieves in text mining. K-Means, on the other hand, has proven suitable for almost any problem when initialized properly, unless there is a great deal of noisy data to deal with. K-Means initialization mainly consists of setting the number of clusters and selecting the initial centroid coordinates. Nevertheless, this initialization can be problematic.
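To make the initialization problem concrete, here is a minimal sketch of the K-Means++ seeding step that DISCERN aims to improve on (this is standard K-Means++, not the DISCERN method itself; the function name and NumPy-based implementation are my own for illustration):

```python
import numpy as np

def kmeans_pp_init(X, k, rng=None):
    """K-Means++ seeding: each new centroid is sampled with probability
    proportional to its squared distance from the nearest chosen centroid."""
    rng = np.random.default_rng(rng)
    n = X.shape[0]
    centroids = [X[rng.integers(n)]]  # first centroid: uniform at random
    for _ in range(k - 1):
        # squared distance from every point to its nearest centroid so far
        d2 = np.min(((X[:, None, :] - np.array(centroids)[None, :, :]) ** 2).sum(-1), axis=1)
        probs = d2 / d2.sum()
        centroids.append(X[rng.choice(n, p=probs)])
    return np.array(centroids)
```

Note that even this improved seeding still takes `k` as an input, which is exactly the kind of parameter DISCERN tries to do away with.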


**September 20, 2019**

What do you think of my neural style transfer of my nephew's photo? I used the Mona Lisa as the styling image.


# The Other Ones

**September 18, 2019**

If you are familiar with artificial neural networks, this is going to be interesting. Neural networks are trained by loss-minimization algorithms such as gradient descent and its stochastic variant, SGD. The one I've used most, and I think the one most people reach for by default, is the Adam optimizer. These algorithms work by minimizing a loss function: the network compares the predicted output to the actual target and tries to make the prediction more accurate by minimizing the distance between them, which is the loss. Nevertheless, when it comes to specific uses such as image similarity detection, a function called "triplet loss" is often used instead.
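The idea behind triplet loss can be sketched in a few lines. This is the standard hinge formulation over squared Euclidean distances; the NumPy implementation and the default margin value are my own choices for illustration:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Hinge-style triplet loss: pull the anchor's embedding toward the
    positive example and push it away from the negative by at least `margin`."""
    d_pos = np.sum((anchor - positive) ** 2, axis=-1)  # squared distance anchor-positive
    d_neg = np.sum((anchor - negative) ** 2, axis=-1)  # squared distance anchor-negative
    return np.maximum(d_pos - d_neg + margin, 0.0)
```

The loss is zero once the negative is farther from the anchor than the positive by the margin, so the network is rewarded for an embedding space where similar images cluster together.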


**August 21, 2019**

A few months ago, I shifted my research focus towards unsupervised learning and parameterless clustering. As one might notice, even unsupervised learning algorithms require supervision over their learning process, as there are anywhere from a few to numerous parameters to be tuned. In older clustering algorithms such as K-Means, the number of clusters is always an input. Furthermore, while density-based methods such as DBSCAN do not rely on the number of clusters, they do rely on two other parameters (a neighborhood radius and a minimum neighbor count) which help them detect outliers and find arbitrarily shaped clusters. Therefore, I set out to find a solution while exploring Self-Organizing Maps (SOMs).
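DBSCAN's two parameters show up directly in its core-point test, which can be sketched as follows (this is only the first step of DBSCAN, not the full algorithm, and the function name is mine; the parameters are commonly called `eps` and `min_samples`):

```python
import numpy as np

def core_points(X, eps=0.5, min_samples=3):
    """DBSCAN's two parameters in action: a point is a 'core' point if at
    least min_samples points (itself included) lie within distance eps."""
    dists = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    return (dists <= eps).sum(axis=1) >= min_samples
```

Points that are neither core points nor within `eps` of one are labeled noise, which is how DBSCAN detects outliers, but it also illustrates how much the result hinges on choosing these two values well.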
