Evaluation metrics


Evaluation metrics

Gokul
I have written some rules in the SWRL tab in Protege. Now I must evaluate my
rules as well as my concepts. For evaluation, must I use an external
tool, or is there a built-in tool available in Protege? Will anyone guide
me further?



--
Sent from: http://protege-project.136.n4.nabble.com/Protege-User-f4659818.html
_______________________________________________
protege-user mailing list
[hidden email]
https://mailman.stanford.edu/mailman/listinfo/protege-user

Re: Evaluation metrics

Michael DeBellis-2
Gokul, if by "evaluate your rules" you mean you want to run your rules against your ontology and check that they do what you expect them to do, then you don't need any external tools. Use the Pellet reasoner; to the best of my knowledge it has the best support for SWRL right now. Then just run the reasoner as you normally would after any other change to your ontology. When you inspect your ontology you should see that new values have been added to individuals as a result of your rules firing. As with any inferences made by the reasoner, these new values will be highlighted in yellow and will have a little "?" icon next to them; if you click on it, the explanation will include the rule(s) that fired and caused the specific new inference.

If you don't see the inferences you expect, you can use SQWRL to debug your rules. I think there is also a SWRL debugging tool in one of the current Protege plugins, but I've always found SQWRL enough to debug any rules I had, even in some fairly large and complex ontologies. As an example of using SQWRL, suppose you have a (trivial) rule like:

hasCar(?p,?c) -> Driver(?p)

If the rule isn't firing, you can write a SQWRL query in which you replace the consequent with a sqwrl:select statement:

  hasCar(?p,?c) -> sqwrl:select(?p,?c)

This will print a result row every time the antecedent matches, showing the values of ?p and ?c. If it prints no values, you know the rule isn't matching any individuals in your ontology.
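The same trick extends to rules with several antecedent atoms. For a hypothetical rule such as Person(?p) ^ hasCar(?p, ?c) -> Driver(?p), you can query each atom on its own:

  Person(?p) -> sqwrl:select(?p)
  hasCar(?p, ?c) -> sqwrl:select(?p, ?c)

Whichever query returns no rows points at the atom that never matches, which tells you what is missing from your test data.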

I have an example of using SQWRL to debug a small tutorial ontology in my SWRL tutorial: https://www.michaeldebellis.com/post/swrl_tutorial 

BTW, this is a different page than the one I used to give out; I've switched over to a new tool for my blog, so if you have any problems downloading the files, please feel free to email me directly.

Michael


Re: Evaluation metrics

Lorenz Buehmann

According to the topic title, I doubt he wants to simply execute the rules ... but I might be wrong as usual.

But as usual for his messages in the past, I have no idea what exactly he's asking. You should start learning how to ask good questions: context, examples, etc. - just more details. Terms like "evaluation metrics" are non-standard - at least I don't know what is meant here.

I also don't understand this:

Now I must evaluate my rules as well as my concepts.

What does "evaluate my concept" mean - what is your "concept"? Again, a term like "concept" is totally ambiguous, especially in computer science and even more so in ontology modeling.


Context, examples, data, input, output etc. - people need more details before they can help.


Re: Evaluation metrics

Lorenz Buehmann
Ok, we've now got 3 questions regarding ontology evaluation in the last few
days.

Is there some homework or university project going on? It can hardly be a
coincidence. If I'm right, it would be nice to know who the supervisor is.




Re: Evaluation metrics

Neha Sood
I think it's a coincidence, Dr. Lorenz... and why do we need to know the supervisor when someone is just asking a question? 🤔


Re: Evaluation metrics

Michael DeBellis-2
Lorenz, I'm pretty sure it's a coincidence. I was talking one-on-one with Neha about some OWL basics that wouldn't have been interesting to the whole list, and she asked me about ontology evaluation; I suggested she ask the list, since I don't know much about the topic.

As for Gokul's SWRL question, I may have misunderstood it, but from the context it seemed to me he probably wasn't asking about evaluating how good the rules are, but about evaluating in the sense of seeing them run and produce results.

I've seen this with other new users: they expect there is something extra they must do to get SWRL rules to run besides just running the reasoner. I think this happens to new users for a number of reasons. First, some of the reasoners don't support the SWRL built-ins, so they may be using a reasoner that isn't processing SWRL at all. Second, they write rules and don't see results because they don't understand how SWRL works, they don't have the appropriate individuals in their ontology as test data, or they are misunderstanding negation as failure or the Open World Assumption.
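To illustrate the test-data point (with made-up individual names), a rule like hasCar(?p, ?c) -> Driver(?p) can only fire if the ontology actually contains a matching assertion about some individual, e.g.:

  hasCar(John, CarA)

Without an assertion like that, the antecedent never matches. And because of the Open World Assumption, the reasoner only works from asserted or inferred facts; a rule will never fire on the basis of a fact that is merely unknown.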

But I could quite possibly be wrong; hopefully we'll get some more feedback from Gokul to clarify his question. Gokul, as always, if you want more feedback, the best way to get it is to include the latest version of your ontology.

Michael


