<div dir="ltr"><div class="gmail_extra"><div class="gmail_quote">On Fri, Sep 22, 2017 at 12:06 AM, Richard Tran Mills <span dir="ltr"><<a href="mailto:rtmills@anl.gov" target="_blank">rtmills@anl.gov</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div><div>Thanks for sharing this, Barry. I haven't had time to read their paper, but it looks worth a read.<br><br></div>Hong, since many machine-learning or data-mining problems can be cast as linear algebra problems (several examples involving eigenproblems come to mind), I'm guessing that there must be several people using PETSc (with SLEPc, likely) in this this area, but I don't think I've come across any published examples. What have others seen?</div></div></blockquote><div><br></div><div><a href="http://epubs.siam.org/doi/abs/10.1137/S1052623400374379">http://epubs.siam.org/doi/abs/10.1137/S1052623400374379</a><br></div><div><br></div><div> Matt</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div>Most of the machine learning and data-mining papers I read seem employ sequential algorithms or, at most, algorithms targeted at on-node parallelism only. With available data sets getting as large and easily available as they are, I'm surprised that there isn't more focus on doing things with distributed parallelism. One of my cited papers is on a distributed parallel k-means implementation I worked on some years ago: we didn't do anything especially clever with it, but today it is still one of the *only* parallel clustering publications I've seen.</div><div><br></div><div>I'd love to 1) hear about what other machine-learning or data-mining applications using PETSc that others have come across and 2) hear about applications in this area where people aren't using PETSc but it looks like they should!<br></div><div><br></div><div>Cheers,<br></div>Richard<br></div><div class="gmail_extra"><br><div class="gmail_quote"><span class="gmail-">On Thu, Sep 21, 2017 at 12:51 PM, Zhang, Hong <span dir="ltr"><<a href="mailto:hongzhang@anl.gov" target="_blank">hongzhang@anl.gov</a>></span> wrote:<br></span><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><span class="gmail-">Great news! According to their papers, MLSVM works only in serial. I am not sure what is stopping them using PETSc in parallel.<br>
>>
>> Btw, are there any other cases that use PETSc for machine learning?
>>
>> Hong (Mr.)
>>
>>> On Sep 21, 2017, at 1:02 PM, Barry Smith <bsmith@mcs.anl.gov> wrote:
>>>
>>>
>>> From: Ilya Safro <isafro@g.clemson.edu>
>>> Date: September 17, 2017
>>> Subject: MLSVM 1.0, Multilevel Support Vector Machines
>>>
>>> We are pleased to announce the release of MLSVM 1.0, a library of fast
>>> multilevel algorithms for training nonlinear support vector machine
>>> models on large-scale datasets. The library is developed as an
>>> extension of PETSc to support, among other applications, the analysis
>>> of datasets in scientific computing.
>>>
>>> Highlights:
>>> - The best quality/performance trade-off is achieved with algebraic
>>>   multigrid coarsening
>>> - Tested on academic, industrial, and healthcare datasets
>>> - Generates multiple models for each training
>>> - Effective on imbalanced datasets
>>>
>>> Download MLSVM at https://github.com/esadr/mlsvm
>>>
>>> Corresponding paper: Sadrfaridpour, Razzaghi and Safro, "Engineering
>>> multilevel support vector machines", 2017,
>>> https://arxiv.org/pdf/1707.07657.pdf
>>>

--
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

http://www.caam.rice.edu/~mk51/
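
As an illustration of the eigenproblem route Richard mentions, here is a
minimal sketch of how spectral clustering's core step, computing the
smallest eigenpairs of a graph Laplacian, might look with SLEPc's EPS
solver. It assumes a symmetric Laplacian Mat L has already been assembled
in parallel; the function name and the spectral-clustering framing are
illustrative assumptions, not taken from the thread or from MLSVM.

    #include <slepceps.h>

    /* Sketch: compute the k smallest eigenpairs of a graph Laplacian L.
       The rows of the resulting eigenvectors form the low-dimensional
       embedding that spectral clustering then feeds to k-means. */
    PetscErrorCode SmallestLaplacianEigs(Mat L, PetscInt k)
    {
      EPS            eps;
      Vec            xr;
      PetscScalar    kr;
      PetscInt       i, nconv;
      PetscErrorCode ierr;

      ierr = MatCreateVecs(L, NULL, &xr);CHKERRQ(ierr);
      ierr = EPSCreate(PETSC_COMM_WORLD, &eps);CHKERRQ(ierr);
      ierr = EPSSetOperators(eps, L, NULL);CHKERRQ(ierr);   /* standard problem L x = lambda x */
      ierr = EPSSetProblemType(eps, EPS_HEP);CHKERRQ(ierr); /* Laplacian is symmetric */
      ierr = EPSSetWhichEigenpairs(eps, EPS_SMALLEST_REAL);CHKERRQ(ierr);
      ierr = EPSSetDimensions(eps, k, PETSC_DEFAULT, PETSC_DEFAULT);CHKERRQ(ierr);
      ierr = EPSSetFromOptions(eps);CHKERRQ(ierr);          /* allow -eps_* runtime options */
      ierr = EPSSolve(eps);CHKERRQ(ierr);
      ierr = EPSGetConverged(eps, &nconv);CHKERRQ(ierr);
      for (i = 0; i < nconv && i < k; i++) {
        ierr = EPSGetEigenpair(eps, i, &kr, NULL, xr, NULL);CHKERRQ(ierr);
        /* xr is eigenvector i; its local entries are this rank's rows
           of the spectral embedding */
      }
      ierr = EPSDestroy(&eps);CHKERRQ(ierr);
      ierr = VecDestroy(&xr);CHKERRQ(ierr);
      return 0;
    }

Because both the matrix and the eigensolver are distributed objects, the
same code runs unchanged from one core to thousands, which is the kind of
reuse the thread is asking about.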
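
And for the distributed k-means Richard describes: the "nothing especially
clever" structure he alludes to is typically a purely local assignment
step followed by a global reduction of partial centroid sums and counts in
each iteration. Below is a hedged sketch of a single iteration in plain
MPI; all names here are illustrative, not taken from Richard's paper.

    #include <math.h>
    #include <stdlib.h>
    #include <mpi.h>

    /* One iteration of distributed k-means. Each rank owns n_local points
       (row-major, n_local x dim); every rank holds an identical copy of
       the k x dim centroid array, which is updated in place. */
    void KMeansIterate(const double *pts, int n_local, int dim,
                       double *centroids, int k, MPI_Comm comm)
    {
      double *sums   = calloc((size_t)k * dim, sizeof(double));
      long   *counts = calloc((size_t)k, sizeof(long));

      /* Assignment step: purely local, embarrassingly parallel. */
      for (int i = 0; i < n_local; i++) {
        int best = 0; double bestd = INFINITY;
        for (int c = 0; c < k; c++) {
          double d = 0.0;
          for (int j = 0; j < dim; j++) {
            double t = pts[i*dim + j] - centroids[c*dim + j];
            d += t * t;
          }
          if (d < bestd) { bestd = d; best = c; }
        }
        counts[best]++;
        for (int j = 0; j < dim; j++) sums[best*dim + j] += pts[i*dim + j];
      }

      /* Update step: two reductions give every rank the global sums and
         counts; the communication volume is O(k*dim), independent of the
         number of points. */
      MPI_Allreduce(MPI_IN_PLACE, sums,   k * dim, MPI_DOUBLE, MPI_SUM, comm);
      MPI_Allreduce(MPI_IN_PLACE, counts, k,       MPI_LONG,   MPI_SUM, comm);
      for (int c = 0; c < k; c++)
        if (counts[c] > 0)
          for (int j = 0; j < dim; j++)
            centroids[c*dim + j] = sums[c*dim + j] / (double)counts[c];

      free(sums);
      free(counts);
    }

Iterated to convergence, the per-step cost is dominated by the local
distance computations, which is why even a straightforward implementation
like this scales well in the distributed-memory setting.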