Saturday, June 29, 2019

OWL, SWRL, & Protege Tips #1: Removing Ontology Prefixes from SWRL rules

I've been following the user support list for Protege for quite a while now and I've noticed that some problems seem to come up frequently. This is the first of what I plan to be a series of posts about little tricks, common problems, and resolutions that I think will be useful for many new users of OWL, SWRL, and Protege.

The first one deals with a common issue with SWRL rules displayed in the Protege SWRLTab. Often the rules will be displayed with the name of the ontology (or "autogen0") as a prefix to every terminal in the rule.

For example, in a simple car ontology that I created to demonstrate some concepts to a new user, I had a SWRL rule defining that a Person canDrive if they have a car:

 Person(?p) ^ hasCar(?p, ?c) -> canDrive(?p, true)

However, after running the reasoner, the rule then looked like this:

untitled-ontology-165:Person(?p) ^ untitled-ontology-165:hasCar(?p, ?c) -> untitled-ontology-165:canDrive(?p, true)

To be clear, this is not a serious problem. The prefix does not affect the reasoning, and you can even edit the rules without the prefix; it will just be added back automatically. However, the prefixes look confusing and somewhat negate one of the main goals of rules: that they should be intuitive descriptions of business logic.

The solution is simple, but it requires something I normally avoid like the plague: editing your OWL file with a text editor rather than with Protege. I typically use Notepad on Windows; use Notepad or a similarly simple plain-text editor. You don't want to use MS Word or any other WYSIWYG word processor because those don't edit plain text. They look for (and may insert if they don't find them) statements in markup languages like RTF, which will destroy your OWL file.

Also, make sure to turn off all the "smart" options in your text editor. The first time I did this I used the default text editor on a Mac, which by default replaced all quotes with "smart quotes" and made the OWL file impossible to parse.

Open your OWL file in the text editor and look for a mapping statement like this:

xmlns:untitled-ontology-165="http://www.semanticweb.org/michaeldebellis/ontologies/2019/3/untitled-ontology-165#"

In my experience Protege often inserts a statement like this for you in an ontology even though you don't need it. The statement should be near the top of your file. You can just remove that statement and then save your file. Note: make sure that your saved file still has a ".owl" file extension. When I used Notepad just now it saved my file as a text file and I had to edit the extension to make it an OWL file again.
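If you prefer to script the change rather than edit by hand, the edit can be sketched in a few lines of Python. This is just a sketch, assuming an RDF/XML (.owl) file; the prefix name is the one from my example, and the header string here is only a stand-in for your real file contents:

```python
import re

def remove_prefix_declaration(owl_text, prefix):
    # Delete the xmlns:<prefix>="..." attribute from the RDF/XML header.
    pattern = r'\s*xmlns:' + re.escape(prefix) + r'="[^"]*"'
    return re.sub(pattern, '', owl_text, count=1)

# Illustrative stand-in for the top of an OWL file saved by Protege.
header = ('<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"\n'
          '    xmlns:untitled-ontology-165='
          '"http://www.semanticweb.org/michaeldebellis/ontologies/2019/3/untitled-ontology-165#">')

cleaned = remove_prefix_declaration(header, "untitled-ontology-165")
print("untitled-ontology-165" in cleaned)  # False: the declaration is gone
```

If you use something like this on a real file, read the file in, write the result back out, and make sure the saved copy keeps its ".owl" extension, just as with the manual edit.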

After the change when I loaded the edited ontology and ran the reasoner the rule now looks as I would expect it to:

Person(?p) ^ hasCar(?p, ?c) -> canDrive(?p, true)

The unwanted prefixes have been removed. Thanks to Martin O'Connor for helping me figure this out a long time ago. The archived email post with Martin's answer can be found here: Archived Post from Protege List.

Tuesday, July 31, 2018

Universal Moral Grammar (UMG) Ontology

PLEASE NOTE: This website is no longer my official blog. I keep it here because in some older papers I pointed people to it. Please see the same post at https://www.michaeldebellis.com/post/umg_ontology and for other newer posts see my current blog at: https://www.michaeldebellis.com/blog 

In his book Moral Minds, Marc Hauser hypothesized the existence of what he termed a Universal Moral Grammar (UMG):

I argue that our moral faculty is equipped with a universal moral grammar, a toolkit for building specific moral systems. Once we have acquired our culture’s specific moral norms… we judge whether actions are permissible, obligatory, or forbidden, without conscious reasoning and without explicit access to the underlying principles. 

I have developed an ontology that is a formal model of a UMG. In September of 2018 I presented this paper on the ontology at the Semantics 2018 conference in Vienna: A Universal Moral Grammar (UMG) Ontology

This page consists of additional materials to support that paper such as a link to the actual ontology as well as a much extended version of the Semantics 2018 paper. I’m also pleased to say that the UMG ontology below won the award for Best Vocabulary at the Semantics 2018 Vocabulary Carnival.

Here is a PDF of the presentation that I gave at the Semantics 2018 Conference: Semantics 2018 Presentation

Here is the OWL version of the ontology: UMG-Ontology-8-30-18.OWL

This is a significantly longer version of the Semantics 2018 paper:  UMG Extended Paper

Here is a link to the ontology in Web Protege:  UMG Ontology in Web Protege

The UMG ontology is released under the Creative Commons Attribution 4.0 License

Note that to view the link above you need to first set up an account on the Stanford WebProtege server. Setting up an account is easy: all you need to do is provide a user name, email, and password. To set up an account go here: https://webprotege.stanford.edu/#accounts/new

You need to set up the account and log in before you click on the link to the ontology. Also, note that you can download the ontology from WebProtege if you wish. Click on the History icon (the far right one in the group starting with Classes, Properties, etc.). This will show a list of revisions. Select the latest revision (there should be a little grey icon in the right corner of the history entry, like "R1") and then select the option "Download revision 1" from the pop-up menu.

For questions and comments please feel free to add a comment here; I check comments regularly and usually respond within a day. Also, feel free to contact me directly. My email is included at the top of the Semantics 2018 paper and the UMG Extended Paper.

Thursday, July 5, 2018

Using Excel's Matrix Operations for Evolutionary Game Theory

I've been reading John Maynard Smith's classic book Evolution and the Theory of Games. It reminds me of Syntactic Structures by Chomsky in that it's a very short little book, but it takes more effort to really understand than books that are orders of magnitude larger. One thing I've found that helps me understand complex topics is to develop some model as I'm reading. I've developed several OWL ontologies while reading up on a topic just to help me get clear on various concepts. In this case I started with a wonderful GNU tool called Octave, which is a scaled-down (but still very powerful) free version of MATLAB and does matrix and linear algebra computations. But as I was working it occurred to me that the material would be much more reusable and amenable to "what if" games in spreadsheet format rather than in the format that Octave uses. So just for grins I googled "matrix operations in Excel" and was amazed at how well Excel now supports matrix operations (I hadn't used Excel in quite a while).

I've developed the following spreadsheet: Evolutionary Game Theory in Excel (Note: you can download the spreadsheet by right-clicking and selecting "Save Link As")

So far I've implemented Appendices A, B, and D of Maynard Smith's book. Appendix C is a proof, which I don't think can be implemented in a spreadsheet. Appendix A shows how to calculate the payoff between two players when each player has a different payoff matrix. The payoff for player 1 is "P Payoff" and for player 2 is "Q Payoff". The strategy for player 1 is the column vector p and for player 2 the column vector q. So if p = [.3; .6; .1], it means that player 1 plays Hawk 30% of the time, Dove 60%, and Retreat 10%. Note I'm using the same notation here as Octave, where ";" starts a new row and "," separates elements in the same row. So the transpose of p (denoted p') would be [.3, .6, .1].
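The Appendix A computation is just the matrix product p' A q. Here is a minimal NumPy sketch; the payoff matrix and strategy vectors are made-up illustrative values, not the ones in the spreadsheet:

```python
import numpy as np

# Illustrative payoff matrix for player 1 (rows: own strategy, columns:
# opponent's strategy). Strategies in order: Hawk, Dove, Retreat.
A = np.array([[-1.0, 2.0, 2.0],
              [ 0.0, 1.0, 2.0],
              [ 0.0, 0.0, 1.0]])

p = np.array([0.3, 0.6, 0.1])  # player 1's mixed strategy
q = np.array([0.2, 0.7, 0.1])  # player 2's mixed strategy

# Expected payoff to player 1 is p' A q.
payoff_p = p @ A @ q
print(round(payoff_p, 2))  # 0.97 for these illustrative numbers
```

The same line with player 2's payoff matrix in place of A gives "Q Payoff", which is exactly what the MMULT/TRANSPOSE combination does in the spreadsheet.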

Appendix B shows how to compute the ESS (which is also a Nash equilibrium) for a two-person game that meets the requirements in the book. That computation uses the same P Payoff matrix as Appendix A, so it is only valid when there are effectively two strategies, i.e., when one strategy is dominated by another and hence can be eliminated. As I've set it up in the spreadsheet, Dove dominates Retreat, so the formula holds. Note: as I read further in the book I realized that R stands for Retaliate, which is quite different from Retreat; however, I'm leaving the spreadsheet as is because with R dominated by D the ESS formula is valid. If you change the values for R so that it is no longer dominated (e.g., to make it consistent with Retaliate), the spreadsheet will still give values for the ESS, but they won't be valid, since that formula only works for two-strategy games.
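For a two-strategy game the mixed ESS can be computed directly from the 2x2 payoff matrix. This is a sketch using the standard Hawk-Dove payoffs with illustrative values V = 2 (resource value) and C = 4 (cost of a fight), not the exact numbers in the spreadsheet:

```python
def mixed_ess(a, b, c, d):
    """Probability of playing strategy 1 at the mixed ESS of the 2x2 game
    with payoff matrix [[a, b], [c, d]] (payoffs to the row player).
    Valid when neither pure strategy is an ESS (a < c and d < b)."""
    return (b - d) / ((b - d) + (c - a))

# Standard Hawk-Dove payoffs: H vs H = (V - C)/2, H vs D = V,
# D vs H = 0, D vs D = V/2.
V, C = 2.0, 4.0
p_hawk = mixed_ess((V - C) / 2, V, 0.0, V / 2)
print(p_hawk)  # 0.5, the classic result V/C
```

With these payoffs the formula reproduces the well-known result that the ESS fraction of Hawks is V/C.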

Appendix D (on sheet 2; A and B are on sheet 1) shows the fitness W for a player playing a pure strategy. It also shows the mean fitness for the population based on what percentage is playing each strategy (again using the P Payoff matrix), and it computes the fitness for the next iteration of the game for each group playing a particular strategy. Note that when R is dominated by D and you compute the ESS, you can play "what if" by setting the values for H and D differently from the ESS, and you will see that in the next round they converge toward the ESS. E.g., if the percentage of H is greater than the ESS and the percentage of D is less than the ESS, then in the next iteration the percentage of H will decrease and the percentage of D will increase. This continues until they reach the ESS, at which point the population is stable.
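The Appendix D style iteration can be sketched as a discrete replicator update. This is a sketch, not the spreadsheet's exact layout; the baseline fitness W0 and the two-strategy Hawk-Dove payoffs here are illustrative assumptions:

```python
import numpy as np

def replicator_step(p, A, W0=5.0):
    # Fitness of pure strategy i is W0 + (A p)_i; mean fitness is p'(W0 + A p);
    # each strategy's share is reweighted by its relative fitness.
    # W0 is an illustrative baseline ("background") fitness.
    fitness = W0 + A @ p
    mean_fitness = p @ fitness
    return p * fitness / mean_fitness

# Hawk-Dove payoffs with V = 2, C = 4; the mixed ESS is Hawk = Dove = 0.5.
A = np.array([[-1.0, 2.0],
              [ 0.0, 1.0]])
p = np.array([0.8, 0.2])  # start away from the ESS
for _ in range(200):
    p = replicator_step(p, A)
print(np.round(p, 3))  # the shares converge toward the ESS [0.5, 0.5]
```

Starting the population at 80% Hawk, each iteration moves the shares back toward the ESS, which is the "what if" behavior described above.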

I thought this spreadsheet might be useful for others trying to learn the topic as well as for people who want to start doing actual modeling. The current example has 3 strategies but it would be trivial to expand it to more. I have some ideas on altruism that I want to try out with game theory modeling and I think this spreadsheet is a good first step toward what I will need and I hope it may help others as well. Also, there is always a chance I made some error so that's another reason to post it, if anyone finds any errors please let me know. You can comment below, I usually respond to comments within a day.

Wednesday, June 7, 2017

SWRL Process Modeling Tutorial

Please note that this blog is no longer active. For my new blog please go to: https://www.michaeldebellis.com/blog. I have published a new version of the classic Protege/OWL Pizza tutorial that includes SWRL, SHACL, and SPARQL introductions: https://www.michaeldebellis.com/post/new-protege-pizza-tutorial and the original SWRL process model can be found here: https://www.michaeldebellis.com/post/swrl_tutorial

The following is a tutorial for using the Semantic Web Rule Language (SWRL) with the Protege ontology editor.  I chose the process modeling domain because I think it is something many people can relate to.  Also, it's a simple example that highlights some of the powerful mathematical capabilities that are a result of the set theoretic foundation of OWL and SWRL. At least that was my hope.

Here is the PDF of the tutorial: SWRL Process Modeling Tutorial

The initial ontology to start the tutorial is here: SWRLProcessTutorialStart-V2.owl

The final version of the ontology, with an example waterfall model is here: SWRLProcessTutorialFinal.owl

Thanks to all the people on the Protege user support list for answering my endless stream of questions, special thanks to Martin O'Connor.

If you have questions or comments about the tutorial feel free to email me: mdebellissf@gmail.com

Also, the following document has nothing to do with SWRL, but it is something I think many newcomers to Protege and OWL might find useful. OWL (the language underneath Protege) is based on logic and set theory. For those who don't know those concepts or are rusty on them, here is a PDF that is a good overview of the basics. Don't be misled by the cover page: this is from a book on Mathematical Methods in Linguistics, but it is just the first chapter, which is a nice overview of logic and set theory: Partee, et al., Basic Concepts of Set Theory




Sunday, June 4, 2017

Welcome: What's in a name?

It's not easy finding available names on the Internet these days. After multiple failures I finally found one that was available and that I didn't hate. I was a Symbolics hacker at one point. The Symbolics Machine was an AI workstation. Symbolics computers were the first commercial versions of the MIT Lisp machines. They had a three button mouse, a space cadet keyboard, and a huge bitmap graphic monitor when most computers had green screen terminals or DOS style user interfaces.

Indeed, at Andersen Consulting, where I did the majority of my Symbolics hacking, the standard training I had to go through before I got to play on my Lisp machine was programming in COBOL via punch cards, without even a monitor. Everything in the Symbolics operating system and networking environment was written in object-oriented Lisp that developers had full access to. It was a real joy to work on and we could develop amazing software very rapidly. So that was my choice for the blog name.

However, I also liked the name because lately I've been interested in the work of Terrence Deacon, an anthropologist at UC Berkeley, and through him the work of Charles Sanders Peirce. For both of them, the concept of the symbol and its importance to human cognition is an essential idea.

I began by studying philosophy. As I did that I became interested in Artificial Intelligence, and that is how I ended up developing on a Symbolics. Later in my career I did R&D on software engineering and formal methods. One of my most interesting jobs was as a principal investigator for the USAF's Knowledge-Based Software Assistant program.

In the past few years I've educated myself on a number of diverse but related topics in philosophy and science. I've learned to use the Protege ontology editor from Stanford and think it is an amazing tool. This blog will be for discussion of all sorts of topics, from practical uses of OWL and the semantic web to abstract issues of philosophy, mathematics, and science. I started the blog when I was auditing some courses at UC Berkeley and haven't used it in a while, but I'm using it to post a SWRL tutorial I just developed and plan to post more frequently in the future.

I'm currently doing research in ethics which I think is somewhere between philosophy and cognitive science.  I've used Protege to create a formal model of what Marc Hauser calls a Universal Moral Grammar (UMG) and have applied the model to various scenarios and systems from ethical philosophy and evolutionary psychology.

Some of my papers, all published except the most current one on the UMG, can be found at my academia.edu site: https://independent.academia.edu/MichaelDeBellis

Monday, October 26, 2015

Consciousness and The Interface Theory of Perception

Dan Luba passed the video down below to me. The speaker is Donald Hoffman and he is discussing what he calls the Interface Theory of Perception. Here are my reactions to it.


I think Hoffman makes some fundamental errors when he discusses evolution and cognition. He says that evolution didn't shape us to perceive "the truth" but rather only to perceive what will help us adapt.

His first error is to think that this is some major revelation. It's not. In fact, people like Dawkins and Robert Trivers have written at length about how (even though at first it seems counterintuitive) organisms can evolve to deceive themselves or otherwise have less than perfect knowledge of the external world. The most obvious example is kin recognition. It is a much more harmful error (in the sense of reproductive success) for a mother to fail to recognize her child than to make the reverse error. Hence most females are tuned to recognize an organism as their child even when there seems to be strong evidence against that fact. Birds like the cuckoo take advantage of this bias by laying their eggs in the nests of other birds. Even though the cuckoo chick often looks nothing at all like the actual offspring, the mother bird will usually adopt it as her own, even when the cuckoo is much larger than its adopted siblings and is taking far more than a fair share of the food.

There are many other examples. Trivers has an excellent book on the subject called The Folly of Fools: the logic of human self deception.

Hoffman’s second error is not understanding that while it is correct that organisms didn’t evolve to have optimal information about the external world, the kinds of errors they make are mostly understandable and predictable. He speaks as if the rational conclusion from the fact that we don’t have optimal knowledge is that all knowledge is suspect and should be discarded. That is clearly false. For the majority of possible traits, better information equals better adaptation. Predators evolve better sight. Prey evolve better hearing.

Indeed the optical illusions Hoffman starts his talk with are excellent examples that show that humans can understand and correct for the errors that evolution has saddled us with. Theories such as Evo-Devo as well as the standard Darwinian model of adaptation provide us with good models to explain and predict where imperfect knowledge will likely occur due to constraints on possible designs (the vertebrate eye example) or the adaptive advantages of imperfect knowledge (the cuckoo example).

The proper response to these issues is not to just assume that all existing information is wrong but rather to continue to try and understand why and where we may have errors in our perception and cognition.

What I found even more puzzling was that after saying we have to throw out all of existing science and concepts such as causality, Hoffman then proceeds to talk about causality and things like Markov processes in regard to his new model. If causality is totally invalid, then it's as invalid in some new model as it is in existing models. If the external world and traditional math and physics are all an illusion, then so is the science and math that assume there is an outside world and that gave us things like Markov models and quantum physics.

I will say that I think this topic is extremely interesting. For example, I think we can make a good case that the basic foundations for math as well as concepts such as causality and morality are innate cognitive mechanisms.  This leaves open the question: could we even know if there are alternative ways of conceiving the world? My suspicion is that the reason these mechanisms are innate is that they correspond to some universal truths about how the universe is organized and can be understood. I think it is a completely unjustified leap to go from the fact that there are minor and understandable biases in our faculties of understanding to the conclusion that we should completely discount them.

Friday, October 2, 2015

What's it like to be a Computer?

I recently audited a Philosophy of Mind seminar led by John Searle at UC Berkeley. We read Thomas Nagel's What is it like to be a Bat?  and it got me thinking about computer science. Nagel made me realize that we computer scientists are missing out on an essential aspect of what it means to be a computer. We know that most computers get input from the outside world through keyboards, cameras, and microphones. We know that they represent that world via objects, databases, logic, and eventually collections of bits. Clearly this is far different than our human methods for perceiving and representing the world. As Nagel says about bats so we must say about computers:
there is no reason to suppose that it is subjectively like anything we can experience or imagine. This appears to create difficulties for the notion of what it is like to be a [computer]. We must consider whether any method will permit us to extrapolate to the inner life of the [computer] from our own case, and if not what alternative methods there may be for understanding the notion.
As is probably obvious, I don't really think we need to do anything more to understand what it's like to be a computer. My point is that Nagel's argument for why we need to wonder what it is like to be a bat seems as insubstantial as my juxtaposition for the computer.

I think what is going on here is an example of what Chomsky describes [Chomsky 1996, 2008] as trivial questions, such as "do submarines swim?" In English submarines don't swim; in Japanese they do. But the question is not considered a conundrum for marine biologists; it's simply a question of language convention. So in English (at least so far) few people wonder "what it is like" to be a computer. But we do wonder what it is like to be a bat. It is common in literature for people to turn into bats and frogs. We have a common-sense idea that identity is not necessarily tied to a human brain. But common sense and intuition are not science; they may only be the starting point for science. So that should be how we evaluate Nagel's question: are there any actual scientific issues he is getting at?

One of his primary criticisms is that consciousness can't be studied by a "materialist" or "physicalist" approach. I agree that a strictly materialist approach to studying consciousness won't work, but not for the reasons that Nagel advocates. As Chomsky points out [Chomsky 2012], the mind-body distinction ceased to make sense when Newton destroyed the mechanistic worldview on which it was based. Even more so in the modern world, where the fundamental building blocks of "matter" are not sub-microscopic particles but wave functions.

Or consider fields such as computer science or computational linguistics. The concepts we deal with are grammars, languages, transformations, logic, state machines, Turing machines, ontologies, interfaces, etc. These aren't material except in the mundane sense that they can describe things and processes in the real world. However, they aren't materialistic concepts about electrical currents on silicon. Indeed, most of those concepts can be implemented in highly diverse ways. A state machine can describe a software program or the call-response language of various mammals [Hauser 2003]. Several years ago I saw a fascinating paper presented by researchers at Stanford [Myers 2012] who showed that they could use DNA to store information exactly as one would store it on a computer. They demonstrated this by showing how the PDF of their own paper was stored and retrieved via DNA in their lab. These examples show that Nagel's view of materialism is outdated and not relevant to what many people who study computation and cognition are doing. The modern sciences of cognition are "materialistic" only in the most trivial sense.

Now let us consider Nagel's emphasis on "reduction". How can we possibly even begin to think about reducing a scientific theory of mind to biological concepts when we don't yet have a mature scientific theory of mind? As Chomsky points out [Chomsky 2002], we can't even map the neural correlates of consciousness for animals such as bees, whose behavior is several orders of magnitude less complex than that of humans. Why should we tell scientists working on the far harder problem of human cognition that if they can't perform such a reduction, their work is not worth doing?

This brings us to Nagel's general viewpoint on science and philosophy. He is in essence a science denier: if science leads to a conclusion that is uncomfortable, he prefers to reject the science. For example, in his book Mind and Cosmos [Nagel 2012], on pages 26-27, referring to materialistic and evolutionary theories, he says: "but the explanations they propose are not re-assuring enough". A few pages later, on page 29, he says: “Everything we believe, even the most far flung cosmological theories has to be based ultimately on common sense and on what is plainly undeniable”.

The goal of science is not to re-assure us or to validate our common sense intuitions. Indeed, the history of science shows that some of the most important discoveries were resisted because they challenged the current world view and made us re-evaluate the place of humans in the universe. People still resist Darwin because they find it offensive to think that humans evolved from primates. The "far flung cosmological" theory of quantum entanglement undeniably violates our common sense notion of causality.

Based on the history of science I think it would be somewhat surprising if when we ultimately do have a mature scientific theory of mind it didn't make people feel somewhat uncomfortable by forcing us to rethink common sense notions of consciousness such as free will.

Finally, I wish to close with a quote from a rather unrelated text. I'm also auditing a quite different class on philosophy of mathematics, and for that class today I was reading Frege's Foundations of Arithmetic. I hope this doesn't seem overly harsh (I have great regard for Nagel; he is clearly a very influential philosopher), but as I was reading the introduction to Frege I couldn't help but think of Nagel as I read the following:
If Frege goes too far... he is certainly on the side of the angels when he espouses as a model for philosophy the defense of objective scientific truth in matters of conceptual clarification. He is surely right to oppose the supine subjectivism that seems to think we can say whatever we want merely by articulating unargued opinions in the course of creating a literary creative writing exercise. That is not philosophy for Frege... [Jacquette 2007]
Amen brother.

Bibliography

Chomsky, Noam (1996) Language and Thought: Some Reflections on Venerable Themes: Excerpted from Powers and Prospects.

Chomsky, Noam (2002) On Nature and Language. p. 56.

Chomsky, Noam (2008) Chomsky and His Critics. p. 279.

Chomsky, Noam (2012) The machine, the ghost, and the limits of understanding: Newton's contribution to the study of Mind. Lecture at the University of Oslo.

Hauser, Marc and Mark Konishi (2003) The Design of Animal Communication.

Jacquette, Dale (2007). Introduction and Critical Commentary to Foundations of Arithmetic by Gottlob Frege.

Myers, Andrew (2012) Totally RAD: Bioengineers create rewritable digital data storage in DNA. Stanford press release. Note: this is not the research I saw presented which was over ten years ago and unfortunately I can't recall that specific paper but the concept here is the same.

Nagel, Thomas (2012) Mind and Cosmos.