University of Bologna, Italy
Since 2015
40+ countries visited
I believe that being a software developer is not just a job
but a mindset, and that knowing how to code is not a matter
of learning a programming language by heart: it is the
ability to understand and solve problems.
My first approach to coding was at 16. I studied Computer
Science in high school, where I had the opportunity to
practice important programming languages such as x86
Assembly, C/C++, Java, and Visual Basic.
In 2007 I earned a Bachelor's degree in Computer Science and
decided to become a web developer, because it is a very
dynamic field with constant innovation.
I currently work for a small company, developing web
applications for internal business use.
I'm proficient in PHP, SQL, HTML, CSS, and JavaScript, and my
favourite libraries are Laravel, jQuery, and Bootstrap.
I am also skilled in Linux and MySQL.
I like experimenting with new things and have built some
interesting projects with Arduino boards; check them out on my blog!
The thesis is set in a well-defined and crucial area of the
Semantic Web. An ontology serves as a medium for sharing
knowledge between humans and machines; ontology
population is the process by which information is encoded
in a format comprehensible to computers.
The application can automatically populate an ontology by
analyzing the contents of web pages.
It considers the entire content of a web site and, guided by
the description of a given domain within an ontology and by
a set of rules, uses a dictionary and statistical approaches
to determine the content of each page and extract the
information that will populate the ontology. It also allows
viewing the collected data and performing semantic searches.
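The dictionary-based classification step described above can be sketched as follows. This is only an illustrative Python sketch (the thesis itself was implemented in Java with the Protégé API): each ontology class is paired with a dictionary of indicative terms, and a page is assigned to the class whose terms occur most often in its text. The class names and keywords here are invented for the example.

```python
import re
from collections import Counter

# Hypothetical dictionary: ontology class -> indicative terms.
DICTIONARY = {
    "Hotel": ["hotel", "room", "booking", "suite"],
    "Restaurant": ["restaurant", "menu", "dish", "chef"],
}

def classify_page(text):
    """Return (best_class, score) for a page's plain text, or (None, 0)."""
    # Count word occurrences in the page text.
    words = Counter(re.findall(r"[a-z]+", text.lower()))
    # Score each class by the total frequency of its dictionary terms.
    scores = {
        cls: sum(words[term] for term in terms)
        for cls, terms in DICTIONARY.items()
    }
    best = max(scores, key=scores.get)
    return (best, scores[best]) if scores[best] > 0 else (None, 0)
```

A page classified this way would then have its matched terms added as individuals of the winning class to populate the ontology.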
Year: 2007
Technologies: java, protege api, owl, xml, sparql
Keywords: semantic web, ontology, xml, rdf, owl, automatic population, protege, search engine, knowledge extraction, information extraction
The purpose of the project is to create an algorithm that
determines the number and position of road signs in an image
and classifies them by category
(danger, prohibition, obligation, and precedence).
The study covers several established computer vision techniques:
segmentation (decomposition of colors into their fundamental
components), noise reduction with morphological operators,
connected component labeling, definition and extraction of
discriminating features, and finally classification based on the
previously obtained results.
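The color-segmentation step can be sketched as below. This is an illustrative Python version (the original project was written in C++ with the MFC library): red-rimmed signs such as danger and prohibition signs are isolated by thresholding hue, saturation, and intensity in the HSI/HSV color space. The threshold values are invented for the example, not taken from the project.

```python
import colorsys

def red_mask(pixels):
    """pixels: 2D list of (r, g, b) tuples in 0..255.
    Returns a 2D binary mask marking 'red enough' pixels."""
    mask = []
    for row in pixels:
        mask_row = []
        for r, g, b in row:
            h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
            # Red hues sit near 0 (wrapping past 1); require enough
            # saturation and brightness to reject gray and dark pixels.
            is_red = (h < 0.05 or h > 0.95) and s > 0.5 and v > 0.2
            mask_row.append(1 if is_red else 0)
        mask.append(mask_row)
    return mask
```

The resulting binary mask would then be cleaned with morphological operators and passed to connected component labeling to locate candidate sign regions.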
Year: 2006
Technologies: c++, mfc library
Keywords: artificial vision, optical recognition, traffic sign, image segmentation, morphological operators, connected component labeling, hue saturation intensity (HSI)