GitHub - tofgarion/spark-by-example at Community2019
source link: https://github.com/tofgarion/spark-by-example/tree/Community2019
SPARK by Example
SPARK by Example is a project (greatly) inspired by ACSL by Example, a collection of verified C functions and data types from the C++ standard library. SPARK by Example has two objectives:
- implement and verify the same functions as ACSL by Example using SPARK 2014 (SPARK 2014 is a subset of the Ada programming language designed for formal verification)
- highlight the differences between verification of C programs and verification of SPARK programs
AdaCore has developed a great tutorial website for Ada/SPARK beginners. It is recommended to follow at least the SPARK part of this tutorial before reading SPARK by Example.
Tools used
GNAT Community 2019 has been used for this project. You may download and install it using the previous link.
Table of contents
Content of each directory and global Makefile
Each directory corresponds to a chapter of ACSL by Example. Each directory contains a GNAT project file (the file ending in `.gpr`) with information about the location of the sources etc. Common configuration, e.g. GNATprove configuration, is specified in the `spark_by_example_shared.gpr` file at the project root.
For each algorithm `Algo` to be proved, you will find the specification of `Algo` in the `algo_p.ads` file and its implementation in the `algo_p.adb` file. Specifications and implementations are defined in packages. To avoid name clashes with functions, package names have a `_P` suffix, hence the file names.
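As an illustration, here is what such a spec file might look like for a hypothetical `Max_Element` algorithm. The type, function name, and contracts below are an illustrative sketch, not the actual contents of the repository:

```ada
--  Hypothetical max_element_p.ads: specification of a Max_Element
--  algorithm. The package is named Max_Element_P (not Max_Element)
--  to avoid clashing with the function, hence the _p file suffix.
package Max_Element_P with
  SPARK_Mode
is
   type T_Arr is array (Positive range <>) of Integer;

   --  Returns the index of a maximum element of A.
   function Max_Element (A : T_Arr) return Positive with
     Pre  => A'Length > 0,
     Post => Max_Element'Result in A'Range
       and then (for all I in A'Range =>
                   A (I) <= A (Max_Element'Result));
end Max_Element_P;
```

The corresponding body in `max_element_p.adb` would then implement the function, with loop invariants guiding GNATprove toward the postcondition.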
Ghost functions that may be used in several algorithms are grouped in the `spec` directory at the root of the project. Lemmas used in proofs are grouped in the `lemmas` directory at the root of the project.
A Makefile is provided for each chapter with default values for prover timeouts, proof levels, etc. These default values are sufficient to prove the corresponding algorithms with the aforementioned tools on a machine equipped with an Intel Core i7-4810MQ CPU and 16 GB of RAM. For each chapter, an `all` target proves all functions and procedures of the chapter. Each function or procedure also has a corresponding target to prove it individually.
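For instance, assuming the `non-mutating` chapter defines a per-function target named `max_element` (the target name here is an assumption; check the chapter's Makefile for the actual target names), a single function could be proved from the project root with:

```
# Prove only the (hypothetical) max_element target of the
# non-mutating chapter, from the project root
make -C non-mutating max_element
```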
The following variables can be used to control the behaviour of the Makefiles:
- `CHAPTER_NAME_DEFAULT_TIMEOUT` (defined at the beginning of each Makefile) contains the default value for prover timeouts
- `LEVEL_FUNCTION_NAME` defines, for each function, the level used by GNATprove
- `MEMCACHED=1` enables the use of a Memcached server to cache proofs (this speeds up the proof process, as some lemmas are used in several functions for instance). The variables `MEMCACHED_SERVER` and `MEMCACHED_PORT` can be used to specify the IP and port of the server (`localhost:11211` by default)
- `STATS=1` enables the generation of statistics on the proofs, based on the standard output. Generated files will be placed in the `stats` directory at the root of the project
- `PARALLEL` controls the `-j` option of GNATprove for parallelizing proofs (default `0`, i.e. use all cores of the machine)
- `DEBUG=1` enables the debug and verbose options of GNATprove
Other variables are available; see the `Makefile-common` file at the root of the project.
For instance, if you want to prove all functions of the chapter on non-mutating algorithms with statistics and with the default timeout changed to 10, issue the following command from the project root:
make -C non-mutating all STATS=1 NON_MUTATING_DEFAULT_TIMEOUT="TIMEOUT=10"
A Makefile is also available at the root of the project. It has an `all-chapters` target that proves all the chapters, gathers statistics, and generates a `recap.csv` file in the `stats` directory.
For instance, if you want to prove all chapters with statistics and Memcached, issue the following command:
make all-chapters STATS=1 MEMCACHED=1
Contributing
If you want to contribute to SPARK by Example, please read the CONTRIBUTING file.