[protege-owl] OutOfMemory

[protege-owl] OutOfMemory

Congmin min
I am using Protege beta 3.2. I frequently get an OutOfMemoryError message if the ontology is a little bigger.

Is there any way I can fix this problem?

Thanks,
Marlon

[protege-owl] Re: OutOfMemory

Rajverma

Yes,

 

  1. You can allocate more of your existing memory to Protege by editing the Protege.lax file. For example, I have changed mine to "lax.nl.java.option.java.heap.size.max=1500000000" (the value is in bytes, so about 1.5 GB). If that is not enough, then
  2. you should increase your physical memory (in terms of hardware).

 

Cheers,

Raj

 




[protege-owl] Re: OutOfMemory

Nikolaj Berntsen
In reply to this post by Congmin min
Congmin min wrote:

> I am using Protege beta 3.2. I frequently get an OutOfMemoryError
> message if the ontology is a little bigger.
>
> Is there any way I can fix this problem?

I think it's documented somewhere, but here goes:

edit <protege_dir>/Protege.lax
lax.nl.java.option.java.heap.size.max=200000000

200000000 is my latest attempt to find a number that works for me.
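
For reference, the lax value is a byte count, so 200000000 is roughly 190 MB. If you want to confirm that the setting actually took effect, code running inside Protege's JVM (for example, from a plugin or tab widget) can ask the runtime for its heap ceiling. A minimal sketch using only the standard java.lang.Runtime API:

// maxMemory() reports the JVM heap ceiling in bytes; after a restart it
// should correspond to the lax.nl.java.option.java.heap.size.max value.
long maxBytes = Runtime.getRuntime().maxMemory();
System.out.println("Max heap: " + (maxBytes / (1024 * 1024)) + " MB");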

Cheers,
/\/



[protege-owl] Re: OutOfMemory

Rajverma
Well, I think it depends on the size of the ontology that you are dealing with... For example, if one tries to classify thesaurus.owl (from NCI-Oncology), which is around 80 MB, even 1.5 GB of RAM is not enough...

Cheers,
Raj
 



[protege-owl] Re: OutOfMemory

Alan March
I've suffered this problem also, and resolved it along the same lines as
suggested here. Nevertheless, considering that ontologies and automated
reasoning seem to be shaping up as the best approach to developing and
maintaining terminologies, and that biomedical terminologies tend to be huge,
couldn't this problem be tackled by the developers of Protégé so that the
process becomes less dependent on RAM? I'm thinking of paging to disk
and the like. Unfortunately, I am not proficient in Java and can offer very
little help. But I've been working extensively with ontologies and find
Protégé to be quite superior to other tools, including commercial tools such
as Semantic Works and the like.

I feel that this RAM problem could deter a more widespread adoption
of Protégé as **the** tool for ontology management. Raj's comment regarding
Protégé failing on an 80 MB file with 1.5 GB of RAM is something
I didn't know about, as I have not yet reached such sizes. But I
eventually will. I thought that using the database backend would help, but I
just couldn't even use FaCT++ or Pellet under such conditions. So I think
there is an issue here: ontologies will grow large. Just look at the sheer
size of SNOMED and how it would benefit from some ontological revamping. But
I can't imagine using Protégé if the OutOfMemory problem remains a problem.



[protege-owl] Re: OutOfMemory

Tania Tudorache
Alan,

For very large ontologies, like the NCI Thesaurus, you should use the
database backend, not the file-based backend.
The file backend loads the whole ontology into memory, and hence the size
of ontologies in Protege is limited by the maximum amount of memory that
a Java VM can use (1.6 GB on Windows XP and 2 GB on most unix machines).

The database backend loads into memory only the portion of the ontology that
the user needs, and uses caching and other mechanisms to optimize the
ontology operations. For this reason, it is not memory intensive. The
database backend is recommended for ontologies over 50K frames.

You can read about the scalability and tuning of Protege on our wiki:
http://protege.cim3.net/cgi-bin/wiki.pl?ScalabilityAndTuning

I'm working with the NCI thesaurus using the database backend with only
100MB heap size.

Tania




[protege-owl] Re: OutOfMemory

Rajverma
Hi Tania,

You said,

> I'm working with the NCI thesaurus using the database backend with only 100MB heap size.

I'm interested to know whether you are using the OWL-based NCI Thesaurus with the database backend, or the CLIPS-based NCI Thesaurus with the database backend, because I don't know whether there is a way to use large OWL ontologies with a database backend! If yes, could you point to some related sources...

Cheers,
Raj





[protege-owl] Rules in OWL

sima soltani
Hello,
 
I have started to work with Protege-OWL for my M.Sc. thesis. I don't know how to write rules in Protege-OWL.
 
Best regards
Sima

[protege-owl] Re: OutOfMemory / database backend

Julia Dmitrieva
In reply to this post by Rajverma
Hello,
Could you please explain to me how to use the database backend
in Protege (and also in the Protege OWL API)?
I cannot find documentation.

Thanks in advance,
Julia



--
Julia Dmitrieva

LIACS Office:  124
Phone:  +31 (0)71 – 5275777
E-Mail:  [hidden email]

Member of:  Imaging
Scientific Personnel



[protege-owl] Re: OutOfMemory

Tania Tudorache
In reply to this post by Rajverma
Raj,

I am using the NCI OWL ontology with the OWL/RDF database backend. And
believe me, it is working :)

The first time, you will have to convert the OWL file-based ontology
into the OWL/RDF database format. This operation takes some time,
but it is only a one-time step. After that you can use the NCI
Thesaurus with only 100 MB heap size, and it also loads pretty fast (less
than 1 minute). The GUI in database mode is, of course, not as
fast as in file-based mode.

Tania





[protege-owl] Re: Rules in OWL

Tania Tudorache
In reply to this post by sima soltani
Sima,

You will find good documentation on how to write and execute SWRL rules
in Protege here: http://protege.cim3.net/cgi-bin/wiki.pl?SWRLTab
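
To give a flavor of the syntax: SWRL rules are written as antecedent → consequent implications over OWL classes and properties. The classic example from the SWRL specification (hasParent, hasBrother, and hasUncle are illustrative properties from that example, not built-ins), in the human-readable syntax used by the SWRL Tab:

hasParent(?x, ?y) ∧ hasBrother(?y, ?z) → hasUncle(?x, ?z)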

Tania





[protege-owl] Re: OutOfMemory

Julia Dmitrieva
In reply to this post by Tania Tudorache
Hello Tania,

could you please explain to me (or point me to a tutorial) how to
convert thesaurus.owl into the database format?
I know how to load a database, but I have no idea about
the structure of the database.
Thus I would have to create tables in a format that Protege can read.

With best regards,
Julia




--
Julia Dmitrieva

LIACS Office:  124
Phone:  +31 (0)71 – 5275777
E-Mail:  [hidden email]

Member of:  Imaging
Scientific Personnel



[protege-owl] Re: OutOfMemory

Tania Tudorache
Julia,

First you need to load the file-based project into Protege (for this you
need, I think, more than 500 MB of heap size), then you select from the
File menu -> Convert Project to Format -> OWL/RDF database and follow
the wizard steps.

This is also documented here (with older screenshots):
http://protege.stanford.edu/doc/users_guide/projects/saving_a_database_project.html

You also need a database engine (for example, MS Access or MySQL).
Don't forget to copy the JDBC drivers into the Protege directory (as
described in the link above).

Save the project under a different name than the file-based project. The
next time, open the .pprj file that you saved, and you will be
working with the database rather than the file-based ontology.
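
For those who asked about doing this from code rather than the GUI: once the database project has been saved, opening it from Java should look roughly like the sketch below. This is a minimal sketch assuming the standard Protege 3.x core API (Project.loadProjectFromFile); the file name nci_db.pprj is hypothetical, and the package names should be verified against your distribution. The JDBC driver jar (for example, MySQL Connector/J) must be on the classpath, just as in the GUI case.

import java.util.ArrayList;
import java.util.Collection;
import edu.stanford.smi.protege.model.Project;
import edu.stanford.smi.protegex.owl.model.OWLModel;

// Load the saved database project; the JDBC connection details
// (driver class, URL, user, password, table) are stored in the .pprj.
Collection errors = new ArrayList();
Project project = Project.loadProjectFromFile("nci_db.pprj", errors);
if (!errors.isEmpty()) {
    System.err.println("Errors while loading project: " + errors);
}

// The knowledge base behind a Protege-OWL project is an OWLModel.
OWLModel owlModel = (OWLModel) project.getKnowledgeBase();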

Tania





[protege-owl] ProtegeAPI import ontology

Amelie Marseille [MSc BIX]
Please, I need to know how to import an ontology into a current ontology from Java code, using the Protege-OWL API. I really can't find this information in the documentation.
 
Thank you very much,
 
amelie.



[protege-owl] Re: ProtegeAPI import ontology

Tania Tudorache
Amelie,

I have answered your question on the Protege discussion list. Please ask
OWL-related questions on this list in the future.

I attached my answer below.

Tania

------------

Amelie,

...

You can use the following API calls to do an import programmatically
(the imports below are for Protege 3.x; verify the package names against your distribution):

import java.net.URI;
import edu.stanford.smi.protege.util.URIUtilities;
import edu.stanford.smi.protegex.owl.ImportHelper;
import edu.stanford.smi.protegex.owl.jena.JenaOWLModel;
import edu.stanford.smi.protegex.owl.model.OWLModel;

// kb is the knowledge base of the currently open project
OWLModel owlModel = (OWLModel) kb;
String path = "c:/Documents and Settings/ttania/Protege/projects/imported.owl";
URI uri = URIUtilities.createURI(path);

// create a new import helper with a JenaOWLModel as argument
ImportHelper importHelper = new ImportHelper((JenaOWLModel) owlModel);

// add the URI to the set of imported URIs
importHelper.addImport(uri);

try {
    // do the actual import
    importHelper.importOntologies(false);
} catch (Exception e) {
    // handle the exception
    e.printStackTrace();
}


Please ask OWL-related questions on the OWL discussion list in the future.

Thanks,
Tania

-----------





[protege-owl] Problems with Thesaurus.owl out of memory error

Julia Dmitrieva
In reply to this post by Tania Tudorache
Hello Tania,
Thank you for your answer.

The problem is that I cannot load it.

Before I can convert it, I need to load this Thesaurus.owl.
At this moment I have extended my physical memory to
2000 MB, and I have changed the maximum memory in Protege.lax to:
lax.nl.java.option.java.heap.size.max=1300000000
After that I started Protege and got the
user interface shown in the attachment. As you can see, it is not
possible to work with an empty interface.
Do I still not have enough physical memory, or
am I doing something wrong?

With best regards,
Julia


Attachment: Protege.bmp (59K)

[protege-owl] Re: OutOfMemory

Rajverma
In reply to this post by Tania Tudorache
Hi Tania,

Thank you for your reply, and it is nice to know that there is a better way to handle large OWL ontologies... However, I'm interested to know whether there is a possibility of using reasoning services on these OWL database ontologies. Does Racer support reasoning over OWL database ontologies? If yes, how different is it compared to reasoning over normal file-based OWL ontologies? Is there any detailed documentation on this topic?

Cheers,
Raj



-----Ursprüngliche Nachricht-----
Von: [hidden email] [mailto:[hidden email]] Im Auftrag von Tania Tudorache
Gesendet: Dienstag, 4. Juli 2006 18:22
An: [hidden email]
Betreff: [protege-owl] Re: AW: Re: AW: Re: OutOfMemory

Raj,

I am using the NCI OWL ontology with the OWL/RDF database backend. And
believe me, it is working :)

The first time, you will have to convert the OWL file-based ontology
into the OWL/RDF database format, and this operation takes some time,
but this is only a one time step. After that you can use the NCI
Thesaurus with only 100MB heap size and it also loads pretty fast (less
then 1 minute). The GUI operation in database mode is, of course, not as
fast as in file-based mode.

Tania



Mudunuri, Raj wrote:

>Hi Tania,
>
>You said,
>
>  
>
>>I'm working with the NCI thesaurus using the database backend with only 100MB heap size.
>>    
>>
>
>I'm interested to know whether you are using owl based NCI thesaurus with database backend OR clips based NCI thesaurus with database backend! B'cos I don't know whether there is a way to use large owl ontologies with a 'database backend'!! If yes, could you point to some related sources...
>
>Cheers,
>Raj
>
>
>
>-----Ursprüngliche Nachricht-----
>Von: [hidden email] [mailto:[hidden email]] Im Auftrag von Tania Tudorache
>Gesendet: Dienstag, 4. Juli 2006 02:31
>An: [hidden email]
>Betreff: [protege-owl] Re: AW: Re: OutOfMemory
>
>Alan,
>
>For very large ontologies, like the NCI thesaurus, you should use the
>database backend, not the file-based backend.
>The file-backend loads the whole ontology in memory, and hence the size
>of ontologies in Protege is limited by the maximum amount of memory that
>a Java VM can use (1.6 GB on Windows XP and 2 GB on most unix machines).
>
>The database backend loads in memory only the portion of ontology that
>the user needs, and uses caching and other mechanisms to optimize the
>ontology operations. For this reason, it is not memory intensive. The
>database backend is recommened for ontologies over 50K frames.
>
>You can read about the scalability and tuning of Protege on our wiki:
>http://protege.cim3.net/cgi-bin/wiki.pl?ScalabilityAndTuning
>
>I'm working with the NCI thesaurus using the database backend with only
>100MB heap size.
>
>Tania
>
>
>Alan March wrote:
>
>  
>
>>I've suffered this problem also, and resolved it on the same lines as
>>suggested here. Nevertheless, considering that ontologies and automated
>>reasoning seem to be configuring the best solution to developing and
>>mantaining terminologies, and that biomedical terminologies tend to be huge,
>>couldn't this problem be tackled by the developers of protégé so that the
>>process becomes less dependent on RAM memory? I'm thinking of paging to disk
>>and similars. Unfortunately, I am not proficient in Java and can offer very
>>little help. But I've been working extensively with ontologies and find
>>Protégé to be quite superior to other tools, including commercial tools such
>>as Semantic Works and the like.
>>
>>I feel that this RAM memory problem could deterr a more widespread adoption
>>of Protégé as **the** tool for ontology management. Raj's comment regarding
>>the problem of Protégé failing at 80 megs files with 1.5 GB RAM is something
>>I didn't know about as I have not as yet reached such sizes. But I
>>eventually will. I thought that using the database backend would help, but I
>>just couldn't even use Fact++ or Pellet under such conditions. So I think
>>there is an issue here: ontologies will grow large. Just look a the sheer
>>size of Snomed and how it would benefit from some ontological revamping. But
>>I can't imagine using Protégé if the OutOfMemory problem remains a problem.
>>
>>
>>
>>    
>>
>>>-----Original Message-----
>>>From: [hidden email]
>>>[mailto:[hidden email]] On Behalf Of
>>>Mudunuri, Raj
>>>Sent: Monday, July 03, 2006 11:53 AM
>>>To: [hidden email]
>>>Subject: [protege-owl] AW: Re: OutOfMemory
>>>
>>>Well, I think it depends on the size of the ontology that you
>>>are dealing with... for example if one tries to classify
>>>thesaurus.owl (from NCI-Oncology) which is around 80 MB, even
>>>1.5 GB of RAM is not enough...
>>>
>>>Cheers,
>>>Raj
>>>
>>>
>>>-----Ursprüngliche Nachricht-----
>>>Von: [hidden email]
>>>[mailto:[hidden email]] Im Auftrag
>>>von Nikolaj Berntsen
>>>Gesendet: Montag, 3. Juli 2006 16:11
>>>An: [hidden email]
>>>Betreff: [protege-owl] Re: OutOfMemory
>>>
>>>Congmin min wrote:
>>>
>>>  
>>>
>>>      
>>>
>>>>I am using Protege beta 3.2. I frequently get the OutOfMemoryError
>>>>message if the ontology is a little bigger.
>>>>
>>>>Is there any way I can fix this problem?
>>>
>>>I think it's documented somewhere, but here goes:
>>>
>>>edit <protege_dir>/Protege.lax
>>>lax.nl.java.option.java.heap.size.max=200000000
>>>
>>>200000000 is my latest attempt to find a number that works for me.
>>>
>>>Cheers,
>>>/\/

-------------------------------------------------------------------------
To unsubscribe go to http://protege.stanford.edu/community/subscribe.html
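
A note on units for the settings discussed above: the lax heap property takes its value in bytes, so the figures quoted in this thread work out roughly as follows (a sketch; the hard ceiling is the per-process JVM limit Tania mentions, 1.6 GB on Windows XP and 2 GB on most Unix machines):

  # roughly 190 MB (Nikolaj's value)
  lax.nl.java.option.java.heap.size.max=200000000
  # roughly 1.4 GB (Raj's value, close to the Windows XP ceiling)
  lax.nl.java.option.java.heap.size.max=1500000000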

Reply | Threaded
Open this post in threaded view
|

[protege-owl] Re: AW: Re: AW: Re: AW: Re: OutOfMemory

Jeena Maltuvati
Hi list,
 
I have the same question as Raj:
> Thank you for your reply, and nice to know that there is a better way to handle large OWL ontologies... However, I'm interested to know whether there is a possibility of using reasoning services on these OWL database ontologies! Does Racer support reasoning for OWL database ontologies? If yes, then how different is it when compared to the reasoning style of normal OWL ontologies? Is there any detailed documentation on this topic?
 
 
Secondly, Tania, could you please tell us which database you used: MySQL or MS Access?
 
Does it really matter to make a choice between them? In terms of handling a large number of records I think MySQL is much better, but just for the sake of getting rid of the memory problem, does it make any difference?
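
For what it's worth, the database backend talks to the database through plain JDBC, so either choice should work as long as a JDBC driver is available. A sketch of typical MySQL settings (the database and table names here are made up for illustration):

  JDBC driver class: com.mysql.jdbc.Driver
  JDBC URL:          jdbc:mysql://localhost:3306/protege
  Table:             nci_thesaurus

MS Access would be reached through the JDBC-ODBC bridge driver (sun.jdbc.odbc.JdbcOdbcDriver) instead.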
 
Thirdly, I have a very basic question, just to double-check: is it right that NCIoncology.owl is OWL Lite and thesaurus.owl is OWL Full? If yes, will it cause any error if I try to classify OWL Full as OWL DL or OWL Lite, or in any other possible case?
 
Thanks in advance.
 
Jeena

 
On 7/11/06, Mudunuri, Raj <[hidden email]> wrote:
Hi Tania,

Thank you for your reply, and nice to know that there is a better way to handle large OWL ontologies... However, I'm interested to know whether there is a possibility of using reasoning services on these OWL database ontologies! Does Racer support reasoning for OWL database ontologies? If yes, then how different is it when compared to the reasoning style of normal OWL ontologies? Is there any detailed documentation on this topic?

Cheers,
Raj



-----Original Message-----
From: [hidden email] [mailto:[hidden email]] On Behalf Of Tania Tudorache
Sent: Tuesday, July 4, 2006 18:22
To: [hidden email]
Subject: [protege-owl] Re: AW: Re: AW: Re: OutOfMemory

Raj,

I am using the NCI OWL ontology with the OWL/RDF database backend. And
believe me, it is working :)

The first time, you will have to convert the OWL file-based ontology
into the OWL/RDF database format. This operation takes some time,
but it is only a one-time step. After that you can use the NCI
Thesaurus with only a 100 MB heap size, and it also loads pretty fast (less
than 1 minute). The GUI operations in database mode are, of course, not as
fast as in file-based mode.
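
For reference, since the Protege.lax heap value is given in bytes, that 100 MB heap corresponds to:

  lax.nl.java.option.java.heap.size.max=100000000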

Tania




-------------------------------------------------------------------------
To unsubscribe go to http://protege.stanford.edu/community/subscribe.html


Reply | Threaded
Open this post in threaded view
|

[protege-owl] Adding rules...

Pyrococcus Furiosus
In reply to this post by Tania Tudorache
Hi all!
I've built an Interior Design ontology using
protege-owl!
Now I would like to add some rules to this ontology,
rules like:
if the room has a south exposure, the furnishing has to
be of a color that is a cold color.

room, exposure, furniture and color are all classes in
the ontology, and hasExposure and hasColor are
ObjectProperties!
Thank you very much in advance for your advice,
rob


-------------------------------------------------------------------------
To unsubscribe go to http://protege.stanford.edu/community/subscribe.html

Reply | Threaded
Open this post in threaded view
|

[protege-owl] Re: Adding rules...

Kaarel Kaljurand
Hi,

On 8/24/06, Roberto Grimaldi <[hidden email]> wrote:

> Hi all!
> I've built an Interior Design ontology using
> protege-owl!
> Now I would like to add some rules to this ontology,
> rules like:
> if the room has a south exposure, the furnishing has to
> be of a color that is a cold color.
>
> room, exposure, furniture and color are all classes in
> the ontology and hasExposure and hasColor are
> ObjectProperties!

I'm not sure that you need the semantic expressivity of rules (SWRL)
to encode this. (Rule syntax might be easier to work with though).

In (pseudo) natural language I would encode your example as:

If a room faces South then the room uses some furniture that has-color Cold.
(or: Every room that faces South uses some furniture that has-color Cold.)

If a room faces South and the room uses some furniture then it has-color Cold.
(or: All furniture that is used by a room that faces South has-color Cold.)

The first sentence places the someValuesFrom restriction, the second
places the allValuesFrom restriction.
The proper names South and Cold map to OWL individuals. "is used by"
is the inverse property of "uses".
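
A sketch of both readings in Manchester-like class-expression syntax, using the names from the paraphrase above (faces, uses, South and Cold stand in for whatever Roberto's ontology actually calls hasExposure, hasColor and their values):

  (someValuesFrom) every south-facing room uses at least one cold-colored piece of furniture:
      Room and (faces value South)
          SubClassOf: uses some (Furniture and (hasColor value Cold))

  (allValuesFrom) whatever a south-facing room uses is cold-colored, assuming rooms only "use" furniture:
      Room and (faces value South)
          SubClassOf: uses only (hasColor value Cold)

For comparison, the SWRL flavour of the second reading would be something like:

  Room(?r) ^ faces(?r, South) ^ uses(?r, ?f) -> hasColor(?f, Cold)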

--
kaarel
-------------------------------------------------------------------------
To unsubscribe go to http://protege.stanford.edu/community/subscribe.html
