Thorsten Ehlers
Information Processing Letters 118 (2017) 17–20
Article history: Received 16 March 2016; received in revised form 17 August 2016; accepted 17 August 2016; available online 20 September 2016. Communicated by M. Chrobak.

Keywords: Parallel processing; Sorting networks; Theory of computation

Abstract. We present a new sorting network on 24 channels, which uses only 12 layers, improving the previously best known bound by one layer. By monotonicity, this also implies improved sorting networks for 23 channels. This result was obtained by combining techniques for generating prefixes of sorting networks with propositional encodings.

© 2016 Elsevier B.V. All rights reserved.
1. Introduction
Table 1
Optimal depth (d_n) of sorting networks on n inputs, for n ≤ 12.

n     1  2  3  4  5  6  7  8  9  10  11  12
d_n   0  1  3  3  5  5  6  6  7   7   8   8

Table 2
Best known values and bounds on the optimal depth (d_n) of sorting networks on n inputs, for 13 ≤ n ≤ 24. The contributions of this paper are marked with an asterisk. Note that the new result for n = 24 also implies one for n = 23: removing one channel, and all comparators connected to it, yields a sorting network on 23 channels.

n            13  14  15  16  17  18  19  20  21  22  23   24
d_n (upper)   9   9   9   9  10  11  11  11  12  12  12*  12*
d_n (lower)   9   9   9   9  10  10  10  10  10  10  10   10
… bounded by O(n log(n)), as sorting networks sort based on comparisons. Optimal sorting networks are known only for at most 10 inputs [5]; some upper bounds can be found in [6].

This paper focuses on the depth of sorting networks rather than their size. In 1973, Knuth summarised upper bounds on the depth of sorting networks on n ≤ 16 channels [7], cf. Tables 1 and 2. In 1989, Parberry used a SAT-based approach together with a symmetry break on the first layer to prove that the bounds for n ≤ 10 are optimal [8]. This was pushed further by Bundala and Závodný in 2014 [9]: using a decomposition and symmetry-breaking approach for the first two layers combined with SAT solving, they were able to prove that the bounds for n ≤ 16 are optimal. Al-Baddar and Batcher developed a tool to analyse prefixes, i.e. the initial layers of a sorting network, which allowed them to hand-craft improved sorting networks for 18 and 22 channels [10]. Ehlers and Müller used a SAT solver to extend handcrafted prefixes, and found faster sorting networks for 17, 19 and 20 channels [11].

The SAT encodings used were not strong enough to prove optimality for any new case. Codish et al. introduced symmetry breaks for the last layers of a sorting network [12]. Ehlers and Müller suggested an improved SAT encoding and re-ordered the channels of sorting networks, which reduces the number of variables in the encoding and allowed them to prove that 10 layers are optimal when sorting 17 inputs [13].

The purpose of this paper is to present this combination of techniques, and the resulting improved sorting networks on 24 channels. For details on the propositional encoding of sorting networks we refer to [9,13]. Techniques to generate sets of prefixes up to symmetries can be found in [9,5].

3. Construction of new sorting networks

The sorting networks suggested by Batcher can be constructed algorithmically [3], but they are not optimal for n > 8 channels. The sorting networks for n > 8 shown in [2] are handcrafted, and can be used as base cases for merging-based algorithms.

Bundala and Závodný generated sets of Pareto-optimal prefixes on 2 layers and checked, using a SAT solver, which of these can be extended to a sorting network of some depth. Here, one prefix p_1 is considered superior to another prefix p_2 if every sorting network beginning with p_2 can be transformed into one beginning with p_1. In [11], Ehlers and Müller handcrafted prefixes, mainly based on so-called green filters [14], and used a SAT solver to extend these to a full sorting network.

All these approaches are somewhat limited: handcrafting sorting networks is bounded by a human's ability to understand them, current SAT-based approaches do not scale well, and generating all prefixes yields huge sets to test, even when symmetries are considered [5]. We therefore used a combination of these approaches. First, we generate the prefix of a sorting network on 12 channels which almost sorts its inputs. As it seems intractable to consider all such prefixes, we use a greedy approach: given some prefixes on k layers, we generate all Pareto-optimal prefixes on k + 1 layers up to symmetries, and keep only the 32 offspring which yield a minimum number of outputs. Iterating this process gives the prefix on 5 layers shown in Fig. 2, which has 34 different output vectors.

Fig. 2. Prefix of a sorting network on 12 channels, and 5 layers.

Next, we create a prefix on 24 channels consisting of two prefixes on 12 channels, and add two comparators to their last layer which connect unused channels. This gives a total of 1,129 outputs which remain to be sorted by the remainder of the network, which is a tractable size for a SAT solver. This prefix was permuted to minimise the SAT encoding used later on. The resulting formula has 56,949 variables and 1,164,158 clauses, and can be solved by MiniSAT [15] in less than 7 hours on an Intel i7-4770HQ CPU.

The generated sorting network is presented in Fig. 3. Due to the permutation of the channels, its structure is hard to recognise. Therefore, we present an alternative version in Fig. 4, in which we permuted the channels such that the original prefix is restored, and removed redundant comparators. In this presentation, the structure of the first 5 layers becomes visible again. Interestingly, the 6th layer, which was generated by the SAT solver, is very similar to the first layers of a merge step in Batcher's construction [2].

The sorting network in Fig. 4 has 125 comparators. Although it improves the upper bound on depth, it does not improve the upper bounds on the size of sorting networks, as networks with 123 and 118 comparators for 24 and 23 inputs, respectively, are known [6].
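To make the prefix-generation step concrete, the following Python sketch counts the distinct output vectors of a comparator network on 0/1 inputs (the quantity the greedy search minimises and, by the 0/1 principle, the basis for checking that a network sorts) and outlines the greedy layer-by-layer search. All function names are illustrative; the sketch omits the Pareto-optimality and symmetry reductions described above, is far too slow as written to reproduce the 5-layer prefix on 12 channels, and does not include the comparator lists of Figs. 2–4, which are only given graphically.

```python
from itertools import product

def apply_network(comparators, vector):
    """Apply a comparator network; a comparator (i, j) with i < j moves the
    minimum to channel i and the maximum to channel j."""
    v = list(vector)
    for i, j in comparators:
        if v[i] > v[j]:
            v[i], v[j] = v[j], v[i]
    return tuple(v)

def distinct_outputs(comparators, n):
    """Distinct output vectors over all 2^n Boolean inputs."""
    return {apply_network(comparators, v) for v in product((0, 1), repeat=n)}

def sorts(comparators, n):
    """By the 0/1 principle, a network sorts iff it sorts every 0/1 input."""
    return all(list(out) == sorted(out) for out in distinct_outputs(comparators, n))

def all_layers(channels):
    """All ways of placing disjoint comparators on the given channels."""
    if len(channels) < 2:
        yield []
        return
    first, rest = channels[0], channels[1:]
    yield from all_layers(rest)                       # leave `first` unused
    for k, other in enumerate(rest):                  # or compare it with `other`
        for tail in all_layers(rest[:k] + rest[k + 1:]):
            yield [(first, other)] + tail

def greedy_prefixes(n, depth, beam=32):
    """Grow prefixes layer by layer, keeping the `beam` candidates with the
    fewest distinct outputs. Prefixes are kept as flat comparator lists;
    the Pareto and symmetry reductions of the paper are omitted here."""
    prefixes = [[]]
    for _ in range(depth):
        candidates = [p + layer
                      for p in prefixes
                      for layer in all_layers(list(range(n))) if layer]
        candidates.sort(key=lambda p: len(distinct_outputs(p, n)))
        prefixes = candidates[:beam]
    return prefixes
```

For the 5-layer prefix of Fig. 2 the output count is 34, and for the combined prefix on 24 channels it is 1,129; checking that the complete network of Fig. 3 sorts amounts to sorts(network, 24), i.e. a scan over all 2^24 Boolean inputs (feasible, though slow in pure Python).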
Fig. 3. Sorting network on 24 channels. The channels are permuted to create an easier SAT formula.
Fig. 4. The same sorting network after undoing the permutation, and removing redundant comparators.
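Since the 6th layer of Fig. 4 resembles the beginning of a merge step in Batcher's construction, it may be helpful to recall how that construction is generated. The following is a standard textbook formulation of Batcher's odd-even merge sort for a number of channels that is a power of two; it is given here only for comparison and is not the construction used in this paper.

```python
def odd_even_merge_sort(lo, n):
    """Yield the comparators of Batcher's odd-even merge sort for the
    n channels lo, ..., lo + n - 1, where n is a power of two."""
    if n > 1:
        m = n // 2
        yield from odd_even_merge_sort(lo, m)        # sort the lower half
        yield from odd_even_merge_sort(lo + m, m)    # sort the upper half
        yield from odd_even_merge(lo, n, 1)          # merge the two halves

def odd_even_merge(lo, n, r):
    """Merge the channels lo, lo + r, lo + 2r, ... within the block [lo, lo + n)."""
    step = 2 * r
    if step < n:
        yield from odd_even_merge(lo, n, step)       # merge the even subsequence
        yield from odd_even_merge(lo + r, n, step)   # merge the odd subsequence
        for i in range(lo + r, lo + n - r, step):
            yield (i, i + r)
    else:
        yield (lo, lo + r)

# Example: the classic network on 8 channels with 19 comparators and depth 6.
print(len(list(odd_even_merge_sort(0, 8))))
```

The output of this generator can be verified with the sorts check from the previous sketch.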
References

[13] T. Ehlers, M. Müller, New bounds on optimal sorting networks, in: A. Beckmann, V. Mitrana, M.I. Soskova (Eds.), Evolving Computability – 11th Conf. on Computability in Europe, CiE 2015, Bucharest, Romania, June 29–July 3, 2015, Proc., in: LNCS, vol. 9136, Springer, 2015, pp. 167–176.
[14] D. Coles, Efficient filters for the simulated evolution of small sorting networks, in: T. Soule, J.H. Moore (Eds.), Genetic and Evolutionary Computation Conf., GECCO '12, Philadelphia, PA, USA, July 7–11, 2012, ACM, 2012, pp. 593–600.
[15] N. Eén, N. Sörensson, An extensible SAT-solver, in: E. Giunchiglia, A. Tacchella (Eds.), Theory and Applications of Satisfiability Testing, 6th Int. Conf., SAT 2003, Santa Margherita Ligure, Italy, May 5–8, 2003, Selected Revised Papers, in: LNCS, vol. 2919, Springer, 2003, pp. 502–518.