
I have built an Alloy model to check some rules of a system. Now I want to check those rules against the real system, for which I have all the data. For example:

My Alloy model checks rules for finding the mutual friends of two users; I can express this with a few simple constraints in Alloy. Now I want to test it on a big dataset. Suppose I have a Facebook friendship dataset. How can I feed this dataset to Alloy to find the mutual friends of Facebook users?
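
For illustration, the kind of rule I have in mind looks roughly like this (a simplified sketch, not my exact model):

```
sig User {
  friends: set User
}

// friendship is mutual
fact Symmetric { friends = ~friends }

// the mutual friends of a and b
fun mutualFriends [a, b: User] : set User {
  a.friends & b.friends
}
```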

Would it be possible to write a wrapper in JavaScript/Python/Java that links the Alloy Analyzer to my JSON data?

faysal

2 Answers


IMHO, Alloy is not suitable for performing analysis on large data sets.

That being said, I believe you could use aRby, an embedding of Alloy in Ruby. It would allow you not only to script the import of your dataset, but also to specify so-called "partial instances", i.e., instances partially filled with your data, which the analysis can then complete.

If you prefer to use Java, you can work directly with the Alloy API (add the Alloy JAR to your project's build path).
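
For instance, a rough sketch of such a wrapper could look as follows. All names and data are illustrative; in practice you would parse your JSON dataset instead of hard-coding the pairs, and this is only one way to drive the classic Alloy 4.2 API:

```java
import java.util.Set;
import java.util.TreeSet;

import edu.mit.csail.sdg.alloy4.A4Reporter;
import edu.mit.csail.sdg.alloy4compiler.ast.Command;
import edu.mit.csail.sdg.alloy4compiler.ast.Expr;
import edu.mit.csail.sdg.alloy4compiler.ast.Module;
import edu.mit.csail.sdg.alloy4compiler.parser.CompUtil;
import edu.mit.csail.sdg.alloy4compiler.translator.A4Options;
import edu.mit.csail.sdg.alloy4compiler.translator.A4Solution;
import edu.mit.csail.sdg.alloy4compiler.translator.TranslateAlloyToKodkod;

public class MutualFriendsWrapper {
    public static void main(String[] args) throws Exception {
        // Stand-in for friendship pairs parsed from your JSON dataset.
        String[][] pairs = { { "Alice", "Bob" }, { "Bob", "Carol" }, { "Alice", "Carol" } };

        // Collect the user names and generate the Alloy model text:
        // one atom per user, one fact fixing the friendship relation.
        Set<String> users = new TreeSet<>();
        for (String[] p : pairs) {
            users.add(p[0]);
            users.add(p[1]);
        }
        StringBuilder tuples = new StringBuilder();
        for (String[] p : pairs) {
            if (tuples.length() > 0)
                tuples.append(" + ");
            tuples.append(p[0]).append("->").append(p[1]);                // both directions,
            tuples.append(" + ").append(p[1]).append("->").append(p[0]); // friendship is mutual
        }
        String model = "abstract sig User { friends: set User }\n"
                + "one sig " + String.join(", ", users) + " extends User {}\n"
                + "fact Data { friends = " + tuples + " }\n"
                + "fun mutualFriends [a, b: User] : set User { a.friends & b.friends }\n"
                + "run {}\n";

        // Compile and solve; the fact pins the instance down to your data.
        Module world = CompUtil.parseEverything_fromString(A4Reporter.NOP, model);
        A4Options opts = new A4Options();
        opts.solver = A4Options.SatSolver.SAT4J;
        Command cmd = world.getAllCommands().get(0);
        A4Solution sol = TranslateAlloyToKodkod.execute_command(
                A4Reporter.NOP, world.getAllReachableSigs(), cmd, opts);

        // Evaluate the function on the (unique) instance.
        Expr e = CompUtil.parseOneExpression_fromString(world, "mutualFriends[Alice, Bob]");
        System.out.println("mutualFriends[Alice, Bob] = " + sol.eval(e));
    }
}
```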

Make sure that scalability is not a key requirement before starting, as both approaches will require a certain amount of work and dedication to implement.

Loïc Gammaitoni
  • Thanks for your help and the references. I would like to start with Java, as I am more comfortable with it than with Ruby. Can you point me to some sample code that would help me solve my problem? So far, I have downloaded the JAR file from [Alloy 4.2 jar](http://alloy.lcs.mit.edu/alloy/download.html). – faysal May 06 '19 at 18:35
  • In the jar you downloaded, the package `edu.mit.csail.sdg.alloy4whole` contains examples. `ExampleUsingTheAPI.java` can be a good start. – Loïc Gammaitoni May 07 '19 at 12:49

I've been working on a proper API for Alloy; it is currently in a PR on GitHub. With this API it is quite easy to use Alloy as a verifier.

In contrast to Loïc's answer, I think you can run this easily on relatively large datasets, because the slow part is the solver searching for an instance. If you provide an instance yourself, it is easy to verify that it conforms to all the rules.
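
To illustrate the point with the classic API (since the new one is not merged yet; the model and names below are made up): once a fact fixes the relation to your data, a `check` is pure verification rather than search:

```java
import edu.mit.csail.sdg.alloy4.A4Reporter;
import edu.mit.csail.sdg.alloy4compiler.ast.Command;
import edu.mit.csail.sdg.alloy4compiler.ast.Module;
import edu.mit.csail.sdg.alloy4compiler.parser.CompUtil;
import edu.mit.csail.sdg.alloy4compiler.translator.A4Options;
import edu.mit.csail.sdg.alloy4compiler.translator.A4Solution;
import edu.mit.csail.sdg.alloy4compiler.translator.TranslateAlloyToKodkod;

public class VerifyDataset {
    public static void main(String[] args) throws Exception {
        // The fact fixes the friendship relation to the dataset, so the
        // solver does not have to search for instances -- it only verifies.
        String model = "abstract sig User { friends: set User }\n"
                + "one sig Alice, Bob, Carol extends User {}\n"
                + "fact Data { friends = Alice->Bob + Bob->Alice + Bob->Carol + Carol->Bob }\n"
                + "assert Symmetric { friends = ~friends }\n" // a rule to verify
                + "check Symmetric\n";

        Module world = CompUtil.parseEverything_fromString(A4Reporter.NOP, model);
        A4Options opts = new A4Options();
        opts.solver = A4Options.SatSolver.SAT4J;
        for (Command c : world.getAllCommands()) {
            A4Solution sol = TranslateAlloyToKodkod.execute_command(
                    A4Reporter.NOP, world.getAllReachableSigs(), c, opts);
            // For a 'check', an unsatisfiable result means no counterexample.
            System.out.println(c.label + ": "
                    + (sol.satisfiable() ? "counterexample found" : "rule holds on the data"));
        }
    }
}
```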

You can use the existing API as well, but the newly designed API is intended for your purpose and others like it. This is still work in progress, but you can always contact me when there are problems.

Peter Kriens
  • Thank you. I appreciate your offer to help me. I went through your GitHub site. Do you have any example of how to use this API in a project? – faysal May 06 '19 at 18:45
  • As said, it is work in progress. However, you can look at the `org.alloytools.alloy.classic.test` project. This contains a number of test cases. I've tried to make the javadoc in the `org.alloytools.alloy.api` project as clear as possible. – Peter Kriens May 07 '19 at 06:44
  • In real life, a "big data set" as the OP mentioned can easily reach a million entries. Last time I tried to parse a not even remotely large XML file into an A4Solution, I faced scalability issues: https://stackoverflow.com/questions/35703242/capacityexceededexception-when-reading-a-very-large-instance-using-a4solutionrea – Loïc Gammaitoni May 07 '19 at 12:46
  • Also, I believe OP is interested in doing more than simple verification (e.g. deriving Mutual Friends from friendship relations). – Loïc Gammaitoni May 07 '19 at 12:53
  • I think these are valid use cases so we should try to make them work. I've defined interfaces for the important classes. This makes it theoretically possible to define 'lazy' TupleSet mapped by files. It is one of the drivers of the API attempt. Just a small matter of time. – Peter Kriens May 07 '19 at 19:34