<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
</head>
<body text="#000000" bgcolor="#FFFFFF">
<p>Dear all,</p>
<p> I would just like to note that we also develop an SVM
implementation. It is intended for large-scale datasets and makes
use of PETSc parallel linear algebra. Currently, it supports only
the linear kernel: the Hessian is, in fact, a MATNORMAL with an
arbitrary underlying data matrix, so it is possible to use, e.g.,
MATDENSE or MATAIJ depending on the problem. To solve the arising
quadratic program (QP), it uses solvers from our PermonQP package.
Both PermonSVM and PermonQP are libraries built on top of PETSc
and written in the PETSc coding style, much like SLEPc.<br>
<br>
<a class="moz-txt-link-freetext"
href="http://permon.it4i.cz/permonqp.htm">http://permon.it4i.cz/permonqp.htm</a><br>
<a class="moz-txt-link-freetext"
href="http://permon.it4i.cz/permonsvm.htm">http://permon.it4i.cz/permonsvm.htm</a><br>
<br>
<a class="moz-txt-link-freetext"
href="https://github.com/it4innovations/permon">https://github.com/it4innovations/permon</a><br>
<a class="moz-txt-link-freetext"
href="https://github.com/it4innovations/permonsvm">https://github.com/it4innovations/permonsvm</a><br>
<br>
So far, PermonQP implements only an Augmented Lagrangian-type
algorithm, which can be combined with any solver for
box-constrained QPs. PermonQP provides several such solvers of
its own, as well as a TAO wrapper. However, adding an
interior-point implementation would be interesting for us as well.<br>
<br>
PermonSVM is so far a proof of concept, but it already
scales quite well (almost proportionally to the application of
the data matrix to a vector). See, e.g., our PASC poster <a
class="moz-txt-link-freetext"
href="https://www.researchgate.net/publication/318317204_PERMON_PASC17_Poster">https://www.researchgate.net/publication/318317204_PERMON_PASC17_Poster</a><br>
<br>
We'll be grateful for any feedback on this.<br>
<br>
Jakub</p>
<br>
<div class="moz-cite-prefix">On 22.9.2017 06:06, Richard Tran Mills
wrote:<br>
</div>
<blockquote type="cite"
cite="mid:CAOseDjnG+=hvMhEBFWuA0wEN6t5CUBXHgoy9R+7RfCgOJ7m3nQ@mail.gmail.com">
<div dir="ltr">
<div>
<div>Thanks for sharing this, Barry. I haven't had time to
read their paper, but it looks worth a read.<br>
<br>
</div>
Hong, since many machine-learning or data-mining problems can
be cast as linear algebra problems (several examples involving
eigenproblems come to mind), I'm guessing that there must be
several people using PETSc (with SLEPc, likely) in this
area, but I don't think I've come across any published
examples. What have others seen?</div>
<div><br>
</div>
<div>Most of the machine learning and data-mining papers I read
seem to employ sequential algorithms or, at most, algorithms
targeted at on-node parallelism only. With available data sets
getting as large and easily available as they are, I'm
surprised that there isn't more focus on doing things with
distributed parallelism. One of my cited papers is on a
distributed parallel k-means implementation I worked on some
years ago: we didn't do anything especially clever with it,
but today it is still one of the *only* parallel clustering
publications I've seen.</div>
<div><br>
</div>
<div>I'd love to 1) hear about other machine-learning or
data-mining applications using PETSc that others have come
across, and 2) hear about applications in this area where
people aren't using PETSc but it looks like they should!<br>
</div>
<div><br>
</div>
<div>Cheers,<br>
</div>
Richard<br>
</div>
<div class="gmail_extra"><br>
<div class="gmail_quote">On Thu, Sep 21, 2017 at 12:51 PM,
Zhang, Hong <span dir="ltr"><<a
href="mailto:hongzhang@anl.gov" target="_blank"
moz-do-not-send="true">hongzhang@anl.gov</a>></span>
wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0
.8ex;border-left:1px #ccc solid;padding-left:1ex">Great
news! According to their papers, MLSVM works only in serial.
I am not sure what is stopping them from using PETSc in parallel.<br>
<br>
Btw, are there any other cases that use PETSc for machine
learning?<br>
<br>
Hong (Mr.)<br>
<br>
> On Sep 21, 2017, at 1:02 PM, Barry Smith <<a
href="mailto:bsmith@mcs.anl.gov" moz-do-not-send="true">bsmith@mcs.anl.gov</a>>
wrote:<br>
><br>
><br>
> From: Ilya Safro <a href="mailto:isafro@g.clemson.edu"
moz-do-not-send="true">isafro@g.clemson.edu</a><br>
> Date: September 17, 2017<br>
> Subject: MLSVM 1.0, Multilevel Support Vector Machines<br>
><br>
> We are pleased to announce the release of MLSVM 1.0, a
library of fast<br>
> multilevel algorithms for training nonlinear support
vector machine<br>
> models on large-scale datasets. The library is
developed as an<br>
> extension of PETSc to support, among other
applications, the analysis<br>
> of datasets in scientific computing.<br>
><br>
> Highlights:<br>
> - The best quality/performance trade-off is achieved
with algebraic<br>
> multigrid coarsening<br>
> - Tested on academic, industrial, and healthcare
datasets<br>
> - Generates multiple models for each training<br>
> - Effective on imbalanced datasets<br>
><br>
> Download MLSVM at <a
href="https://github.com/esadr/mlsvm" rel="noreferrer"
target="_blank" moz-do-not-send="true">https://github.com/esadr/mlsvm</a><br>
><br>
> Corresponding paper: Sadrfaridpour, Razzaghi and Safro
"Engineering<br>
> multilevel support vector machines", 2017,<br>
> <a href="https://arxiv.org/pdf/1707.07657.pdf"
rel="noreferrer" target="_blank" moz-do-not-send="true">https://arxiv.org/pdf/1707.07657.pdf</a><br>
><br>
<br>
</blockquote>
</div>
<br>
</div>
</blockquote>
<br>
</body>
</html>