Ontology mapping ?

Ontology mapping ?

nika
Hello,

I am a new user and I have a more general approach problem. I do text mining
(via an app) and I discover structures such as person->has->car. I also have a
domain ontology in Protégé describing persons and cars. My questions are:
1) How can my application be notified that the person and car objects belong
to (are concepts of) the predefined ontology?
2) How can I ontologically check structural consistency, e.g. check if there
is an invalid car->has->person instance (not allowed in the predefined
ontology)?
Should I annotate the text results (manually) in order to do this?

Thank you,
Nik Alexiu




--
Sent from: http://protege-project.136.n4.nabble.com/Protege-User-f4659818.html
_______________________________________________
protege-user mailing list
[hidden email]
https://mailman.stanford.edu/mailman/listinfo/protege-user

Re: Ontology mapping ?

Michael DeBellis-2

>How is it possible for my application to be notified that persons and car
>objects belong to (are concepts of) the predefined ontology?  

You need to use a programming library that can work with ontology objects. The most common language for this is Java, so if your application is in Java that will make things easier. The Java library I think most people use is Apache Jena: https://jena.apache.org/ but there are others as well.

There is also a Python library called Owlready2: https://pypi.org/project/Owlready2/

You could also use a SPARQL implementation and connect that with your application. There may be libraries for other languages that I'm not aware of.  
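If you go the SPARQL route, a query along these lines can check whether a discovered individual is typed as one of the ontology's classes. The prefix and the individual/class names below are placeholders for illustration, not names from any actual ontology:

```sparql
# Assumed namespace; replace with your ontology's actual IRI.
PREFIX ex: <http://example.org/carOntology#>

# Ask whether the individual ex:Car1 is (asserted or inferred to be)
# an instance of the ontology class ex:Car.
ASK { ex:Car1 a ex:Car . }
```

Run against a store with reasoning enabled, this also returns true for types the reasoner has inferred, not just those asserted directly.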

> How can I ontologically check the structure consistency e.g. check if
> there is an invalid car->has->person instance (not allowed 
> in the predefined ontology)?  

First, remember that the primary motivation for ontologies in OWL is to reason about data, not to do consistency checking as with traditional database constraints. Unlike a standard DBMS, which uses the Closed World Assumption, OWL uses the Open World Assumption, which makes some kinds of integrity checking difficult or impossible. However, that specific use case can easily be modeled in OWL and Protégé. Assuming Car and Person are classes and hasCar is an object property, you just need to define the domain and range for hasCar: make the domain Person and the range Car (a Person has a Car, but not vice versa). Also make sure that Person and Car are disjoint classes (or are subclasses of classes that are disjoint) so that an individual can't be an instance of both Person and Car. Then, if your ontology has something that is not a Person (such as a Car) but that hasCar something, the reasoner will signal an error.

Note that what the domain definition really does is not to check that everything that hasCar is a Person, but rather to assert that anything that hasCar is a Person. This can be very powerful: you can define a new object without typing it, and the reasoner can often deduce an object's type (what it's an instance of) just from things like domain and range restrictions. I.e., if you don't define Person and Car as disjoint classes, the reasoner won't signal an error if I say that Car1 hasCar Car2; it will just infer that Car1 is both a Person and a Car.
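As a toy illustration of those two rules (domain/range axioms assert types; disjointness turns a bad assertion into an inconsistency), here is a plain-Python sketch. It is not an OWL reasoner, just the two inferences discussed in this thread, using the class and property names from it:

```python
# Toy sketch, NOT a real OWL reasoner. It mimics only two rules:
#   1. domain(hasCar) = Person => any subject of hasCar is inferred a Person
#      (and range(hasCar) = Car => any object of hasCar is inferred a Car)
#   2. Person and Car are disjoint => no individual may be both
DOMAIN = {"hasCar": "Person"}
RANGE = {"hasCar": "Car"}
DISJOINT = {("Person", "Car"), ("Car", "Person")}

def infer_types(asserted_types, triples):
    """Return each individual's types after applying domain/range inference."""
    types = {ind: set(ts) for ind, ts in asserted_types.items()}
    for subj, prop, obj in triples:
        types.setdefault(subj, set()).add(DOMAIN[prop])
        types.setdefault(obj, set()).add(RANGE[prop])
    return types

def inconsistent(types):
    """True if some individual belongs to two disjoint classes."""
    return any((a, b) in DISJOINT
               for ts in types.values()
               for a in ts for b in ts)

# With disjointness, "Car2 hasCar Car1" makes the ontology inconsistent,
# because Car2 is asserted a Car but inferred a Person:
types = infer_types({"Car1": {"Car"}, "Car2": {"Car"}},
                    [("Car2", "hasCar", "Car1")])
print(inconsistent(types))        # -> True

# Without the Car assertion, the same triple just types Car2 as a Person:
types = infer_types({}, [("Car2", "hasCar", "Car1")])
print(sorted(types["Car2"]))      # -> ['Person']
```

A real reasoner such as HermiT or Pellet handles vastly more than these two rules, but the mechanism behind the error Michael describes is the same.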

Hope that helps. Let us know if you have further questions.

Michael


Re: Ontology mapping ?

nika
Thank you for your answer,

In fact I received more information than I was hoping for. Yes, we are on the
same page, as I use Java and SPARQL to match the concepts with the respective
classes.

But for the 2nd part I cannot really imagine how to do this.
OK, I can identify that car corresponds to the ontology class "car" and person
corresponds to the ontology class "person". But then how can I compare this in
Protégé with the OWL fact (definition) person -> has -> car in such a way
that the reasoner will notify me if I have an invalid text sentence "car has
car"?

Rgds,
Nik.




Re: Ontology mapping ?

Michael DeBellis-2
>OK, I can identify that car corresponds to the ontology class "car" and person
>corresponds to the ontology class "person". But then how can I compare this in
>Protégé with the OWL fact (definition) person -> has -> car in such a way
>that the reasoner will notify me if I have an invalid text sentence "car has
>car"?  

I'm not sure I understand your question. If you are asking how to make Protégé process a text sentence like "Car1 just bought a new Car", or even just "Car2 has Car1", that's beyond the scope of what Protégé does. Protégé is a tool for designing ontologies. To process a text sentence and turn it into the appropriate objects you need some kind of Natural Language Processing (NLP) tool. My guess is that there are tools out there built to process text and turn it into objects in an OWL ontology, but I don't know of any, and it's beyond the scope of Protégé support. I tried a Google search and there seem to be some papers on the topic, but the ones I found look rather dated, and it's not really my field. You might try posting to the Ontolog Google forum: https://groups.google.com/forum/#!forum/ontolog-forum  I think some people in that group work on NLP. You might also try the page for the NLP group at the Information Sciences Institute: https://www.isi.edu/research_groups/nlg/home

I think that is what you are looking for, but if you still aren't clear on how to model that only a Person can hasCar some Car and not vice versa, see the attached small CarExample ontology. It has two classes, Car and Person, which are declared disjoint, and a property hasCar with domain Person and range Car. There is also an inverse property, hasOwner. Note that by declaring hasOwner to be the inverse of hasCar, the domain and range for hasOwner are inferred by the reasoner.
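For reference, the core of an ontology like the attached one might look roughly like this in Turtle (the namespace IRI is a placeholder; the real CarExample.owl may differ in detail):

```turtle
@prefix :     <http://example.org/CarExample#> .
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

:Person a owl:Class ;
    owl:disjointWith :Car .        # no individual can be both
:Car a owl:Class .

:hasCar a owl:ObjectProperty ;
    rdfs:domain :Person ;          # anything that hasCar something is a Person
    rdfs:range  :Car ;
    owl:inverseOf :hasOwner .
:hasOwner a owl:ObjectProperty .   # domain/range follow from the inverse
```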

I created two instances of Car, Car1 and Car2, and then asserted that Car2 hasCar Car1. When you run the reasoner you will get an error that the ontology is inconsistent. The error message will tell you that the problem is that Car2 is being forced to belong to both the class Car and the class Person.

You can also go to the Object Properties tab, where you will see that the domain and range for hasCar are highlighted in red, indicating that they are involved in a contradiction. Click the little "?" icon next to Car in the range for hasCar; that will give you an explanation of why there is an inconsistency. It should look like this:

[screenshot: CarExample.PNG — the reasoner's explanation of the inconsistency]

If you remove the assertion that Car2 hasCar Car1 and rerun the reasoner, the ontology will be consistent, and you will see that Mary (an instance of Person in the example) hasCar Car1, and also that the reasoner has inferred that Car1 hasOwner Mary.
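That inverse-property inference can also be sketched in a few lines of plain Python (again a toy illustration, not a reasoner):

```python
# Toy sketch of inverse-property inference: if hasOwner is the inverse of
# hasCar, every (s, hasCar, o) triple entails (o, hasOwner, s).
INVERSE = {"hasCar": "hasOwner", "hasOwner": "hasCar"}

def with_inverses(triples):
    """Return the triples closed under the inverse-property rule."""
    closed = set(triples)
    for s, p, o in triples:
        closed.add((o, INVERSE[p], s))
    return closed

triples = with_inverses({("Mary", "hasCar", "Car1")})
print(("Car1", "hasOwner", "Mary") in triples)   # -> True
```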

That's as much as Protégé and OWL can do for you; to process text and turn it into objects and property assertions, you will need an NLP tool.

Michael


Attachment: CarExample.owl (5K)