RULE EVALUATION METRICS


Gokul
I kindly request someone to tell me clearly how to evaluate my SWRL rules. I noted
that in a journal they used support, confidence, lift, leverage, and conviction
to evaluate SWRL rules. What are these metrics about? Can anyone explain them to me?
Can this be done with any tool?



--
Sent from: http://protege-project.136.n4.nabble.com/Protege-User-f4659818.html
_______________________________________________
protege-user mailing list
[hidden email]
https://mailman.stanford.edu/mailman/listinfo/protege-user

Re: RULE EVALUATION METRICS

Michael DeBellis-2
Gokul: what do you mean by "evaluate my SWRL rule"? Do you mean getting feedback on how good the SWRL rules are? There was a thread just recently with some pointers to tools that evaluate ontologies. In my personal opinion this is still a hard research problem, because you can't evaluate an ontology or a set of rules without knowing what requirements the ontology/rules are trying to satisfy. You can find common anti-patterns (mistakes that people typically make), but (again, this is my opinion; others may disagree) that's about it.

Or do you mean "evaluate" in the sense of running the rules and seeing whether they do what you expect them to do, and if not, why not? That's a different question. To run the rules, all you need to do is use the Pellet reasoner. You also need to make sure you have appropriate data (individuals) in your ontology to test the rules. If you do have appropriate test data and the rules aren't doing what you expect, you can use SQWRL to test your rules, as I've described before and as is described in my SWRL tutorial. There is also a plugin for Protege called the SRE Plugin; I'm including the description from the Protege plugin page below. I've never used this plugin because I already have a lot of experience with SWRL and with rule-based systems in general, so I haven't needed it and have always been able to just use SQWRL to debug my rules, but for new users it might prove very useful:

# SRE Protege Plugin
This work proposes a prototypical implementation of a debugging algorithm for SWRL rules that we call Single Rule Evaluation (SRE).
For practical application, the code is Java-based and can be used in the Ontology Editor Protégé.

Documentation for the plugin will be available soon. In the near future, a standalone version of the plugin will also be available.

**For more information about the underlying concept visit:** http://www.insticc.org/Primoris/Resources/PaperPdf.ashx?idPaper=69241
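To make the SQWRL debugging approach described above more concrete, here is a minimal, hypothetical sketch; the class and property names (`Person`, `hasAge`, `Adult`) are invented for illustration. The first line is a SWRL rule; the second is a SQWRL query with the same antecedent. If the query returns no rows, the rule's body matches no individuals in the ontology, which explains why the rule never fires:

```
Person(?p) ^ hasAge(?p, ?age) ^ swrlb:greaterThan(?age, 18) -> Adult(?p)

Person(?p) ^ hasAge(?p, ?age) ^ swrlb:greaterThan(?age, 18) -> sqwrl:select(?p, ?age)
```

Dropping atoms from the query one at a time then shows which atom in the antecedent is the one that fails to match.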


Re: RULE EVALUATION METRICS

Michael DeBellis-2
In reply to this post by Gokul
One other suggestion: if you want feedback on your rules, probably the best way, if your ontology isn't too large, is to attach it in a message to the list. I can take a quick look and see whether the rules make sense or whether there are any obvious issues with the way you are designing them, and others on the list may also have time to look and give you feedback.


Re: RULE EVALUATION METRICS

Martin O'Connor-2
In reply to this post by Gokul

The terms you refer to are used in the association rule learning field [1].

Association rules are produced algorithmically, typically via some machine learning method. 

Such rules are very different from SWRL rules, which are generally written by humans and, given SWRL's and OWL's basis in description logic, are logical statements of truth. It makes no sense, for example, to attach a confidence value to a SWRL rule; the rule should express something that is believed to be true.

Martin
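For readers unfamiliar with these association-rule metrics, here is a small sketch of how they are computed for a rule A -> B, using a toy transaction set with invented item names (this is an illustration of the standard definitions, not anything specific to SWRL or Protégé):

```python
# Association-rule metrics for a hypothetical rule {bread} -> {milk},
# computed over a toy set of market-basket transactions.
from fractions import Fraction

transactions = [
    {"bread", "milk"},
    {"bread", "butter"},
    {"milk", "butter"},
    {"bread", "milk", "butter"},
    {"milk"},
]

def support(itemset, txns):
    """Fraction of transactions containing every item in `itemset`."""
    hits = sum(1 for t in txns if itemset <= t)
    return Fraction(hits, len(txns))

A, B = {"bread"}, {"milk"}
supp_a = support(A, transactions)       # P(A)          = 3/5
supp_b = support(B, transactions)       # P(B)          = 4/5
supp_ab = support(A | B, transactions)  # P(A and B)    = 2/5

confidence = supp_ab / supp_a           # P(B | A)      = 2/3
lift = confidence / supp_b              # 5/6; < 1 means negative correlation
leverage = supp_ab - supp_a * supp_b    # -2/25; deviation from independence
conviction = ((1 - supp_b) / (1 - confidence)
              if confidence != 1 else float("inf"))  # 3/5

print(supp_ab, confidence, lift, leverage, conviction)
```

Rule miners keep only rules whose support and confidence exceed chosen thresholds; here lift below 1 says that buying bread actually makes milk slightly less likely than its base rate, so the rule would be discarded.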



Re: RULE EVALUATION METRICS

Lorenz Buehmann

Exactly, that was also what I thought when I saw the terms. This is rule mining in general, a subfield of unsupervised machine learning. Indeed, at least the support and confidence values are also used as metrics in other learning algorithms.


@Michael: you see, as I said before, in his context "evaluation (of an ontology or rule)" must mean something different. I still assume he was given a task (maybe an assignment) to create some rules and evaluate them, without the task specifying what "evaluation" means.

@Gokul: please provide more details. What is the task here? Who gave you the task, your supervisor? If so, you should ask that person for clarification. "Evaluation" is not a standardized term.
