
Implemented two graph partition algorithms



  1. Implemented two graph partition algorithms

     1. Kernighan/Lin Algorithm
        Produces a bi-partition of an undirected weighted graph (min-cut).
        Input: $network, a Clair::Network object
        Usage:
          use Clair::Network::KernighanLin;
          my $KL = new KernighanLin($net);
          $graphPartition = $KL->generatePartition();
        Output: $graphPartition, a hash reference:
          $graphPartition == { $node1 => 0, $node2 => 1, ... };
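     To make the min-cut idea concrete, here is a minimal, self-contained Perl sketch of one Kernighan/Lin swap step on a toy graph. The toy weights, the d_value helper, and the single-pass structure are illustrative assumptions; this is not the Clair::Network::KernighanLin implementation.

     use strict;
     use warnings;

     # Toy undirected weighted graph: $w{$u}{$v} == $w{$v}{$u}.
     my %w = (
         a => { b => 3, c => 1 },
         b => { a => 3, d => 1 },
         c => { a => 1, d => 4 },
         d => { b => 1, c => 4 },
     );

     # Initial bi-partition, same shape as $graphPartition above.
     my %part = ( a => 0, b => 0, c => 1, d => 1 );

     # D(v) = external (cut) cost minus internal cost of node v.
     sub d_value {
         my ($v) = @_;
         my $d = 0;
         $d += ( $part{$_} == $part{$v} ? -1 : 1 ) * $w{$v}{$_} for keys %{ $w{$v} };
         return $d;
     }

     # One greedy step: find the cross-partition pair whose swap reduces the
     # cut the most, gain g = D(x) + D(y) - 2*w(x,y), and apply it if g > 0.
     my ( $gain, $bx, $by ) = ( 0, undef, undef );
     for my $x ( grep { $part{$_} == 0 } keys %part ) {
         for my $y ( grep { $part{$_} == 1 } keys %part ) {
             my $g = d_value($x) + d_value($y) - 2 * ( $w{$x}{$y} // 0 );
             ( $gain, $bx, $by ) = ( $g, $x, $y ) if $g > $gain;
         }
     }
     @part{ $bx, $by } = ( 1, 0 ) if defined $bx;
     print "$_ => $part{$_}\n" for sort keys %part;

     The full algorithm repeats this pass, tentatively swapping every pair and keeping the best prefix of swaps, until no positive gain remains.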

  2. Implemented two graph partition algorithms

     2. Girvan/Newman Algorithm
        Produces a hierarchical clustering (multiple partitions) of an unweighted, undirected graph.
        Input: $network, a Clair::Network object
        Usage:
          use Clair::Network::GirvanNewman;
          my $GN = new GirvanNewman($net);
          $graphPartition = $GN->partition();
        Output: $graphPartition, a hash reference mapping each node to a pipe-delimited string, one cluster ID per partition in the hierarchy:
          $graphPartition == {
            $node1 => "0|1|1|1|1|1|1|1|1|2|2|2|2|2|2|2|2|3|3|3|4|4|4|4|7|7|8|8|8|8|9|9|10|10|10|10|11|11|11|4|4|4|4|5|5|6|6|6|6|6|7|7|8|8|8|8|8|8|8|8|9|9|9|9|9|9|9|20|21",
            $node2 => "0|1|1|1|1|1|1|1|1|2|2|2|2|2|2|2|2|1|1|1|2|2|2|2|5|5|6|6|6|6|7|7|8|8|8|8|9|9|9|3|3|3|3|4|4|4|4|4|4|4|5|5|6|6|6|6|6|6|6|6|7|7|7|7|7|7|7|6|7",
            ...
          };
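     Since partition() encodes the whole hierarchy per node, a small helper can recover a flat partition at one level. This is a hedged sketch: the sample strings are shortened, and clusters_at_level is a hypothetical helper, not part of Clair.

     use strict;
     use warnings;

     # Shortened sample of the pipe-delimited output shown above.
     my $graphPartition = {
         node1 => '0|1|1|2|2',
         node2 => '0|1|2|2|3',
     };

     # Cluster of every node at hierarchy level $level (position in the string).
     sub clusters_at_level {
         my ( $partition, $level ) = @_;
         my %flat;
         for my $node ( keys %$partition ) {
             $flat{$node} = ( split /\|/, $partition->{$node} )[$level];
         }
         return \%flat;
     }

     my $flat = clusters_at_level( $graphPartition, 2 );
     print "$_ -> cluster $flat->{$_}\n" for sort keys %$flat;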

  3. Implemented a dependency parser

     Produces non-projective, unlabeled dependency trees.
     Input (one token per column):
       the  dow     fell  22.6  %    on   black  monday  .     <- the sentence
       DT   NNP     VBD   CD    NN   IN   JJ     NNP     .     <- POS (manually added)
       DEP  NP-SBJ  ROOT  DEP   NP   PP   DEP    NP      DEP   <- dependency label (for reference)
       2    3       0     5     3    3    8      6       3     <- dependency parent (for reference)
     Usage:
       perl MSTParser.pl $inputData
       perl MSTParser.pl model $inputData
     Output: out.txt
       the  dow  fell  22.6  %   on  black  monday  .
       DT   NNP  VBD   CD    NN  IN  JJ     NNP     .
       2    3    0     5     3   3   8      6       3           <- dependency parent (parsed result)
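     A short sketch of how the three-line out.txt block can be consumed; the file layout follows the slide, and the child -> head printing is illustrative.

     use strict;
     use warnings;

     # Read the out.txt block shown above: words, POS tags, parent indices.
     open my $fh, '<', 'out.txt' or die "out.txt: $!";
     chomp( my @lines = <$fh> );
     my @words   = split ' ', $lines[0];
     my @pos     = split ' ', $lines[1];
     my @parents = split ' ', $lines[2];

     # Print each child -> head dependency; parent index 0 marks the root.
     for my $i ( 0 .. $#words ) {
         my $head = $parents[$i] == 0 ? 'ROOT' : $words[ $parents[$i] - 1 ];
         printf "%-8s (%-3s) <- %s\n", $words[$i], $pos[$i], $head;
     }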

  4. Result

     Slightly off: numbers such as 11.11 are substituted with <num>.
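     The <num> substitution mentioned above might look like the following one-liner applied before feature extraction; the regex is an assumption about which tokens count as numbers.

     use strict;
     use warnings;

     # Replace numeric tokens (e.g. 22.6 or 11.11) with the <num> placeholder.
     my @tokens = qw(the dow fell 22.6 % on black monday .);
     s/^\d+(?:[.,]\d+)?$/<num>/ for @tokens;
     print "@tokens\n";   # the dow fell <num> % on black monday .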

  5. Some Observations

     1. Extract the features related to the edge:
          John saw    NNP VBD    <Head> NNP VBD NNP
     2. Compare them against the existing features and their weights:
          John saw: 0    NNP VBD: 15    <Head> NNP VBD NNP: 5
     3. Add up the weights:
          Weight(John -> saw) = 20
     The most time-consuming part: calculating the edge weights.
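     The three steps above amount to a feature-weight lookup and a sum. A minimal sketch using the slide's example values; the %weight table and the score_edge helper are illustrative, not MSTParser code.

     use strict;
     use warnings;
     use List::Util qw(sum0);

     # Feature weights from the slide's example.
     my %weight = (
         'John saw'           => 0,
         'NNP VBD'            => 15,
         '<Head> NNP VBD NNP' => 5,
     );

     # Steps 2 and 3: look up each extracted feature, treat unseen ones as 0, sum.
     sub score_edge {
         return sum0( map { $weight{$_} // 0 } @_ );
     }

     print 'Weight(John -> saw) = ',
           score_edge( 'John saw', 'NNP VBD', '<Head> NNP VBD NNP' ), "\n";   # 20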

  6. Questions:

     1. Do we really need a complete graph?
     2. Do we really need so many features? For example (feature:weight):
          1APT4=<root-POS> MID MID:-0.03632653020122016
          1APT4=<root-POS> MID MID:-0.03632653020122016
          1APT2=<root-POS> MID UH:0
          1APT1=<root-POS> MID UH:0
          1APT=<root-POS> MID MID UH:0
          APT4=<root-POS> MID MID&RA&0:-0.03632653020122016
          APT4=<root-POS> MID MID&RA&0:-0.03632653020122016
          APT2=<root-POS> MID UH&RA&0:0
          APT1=<root-POS> MID UH&RA&0:0
          APT=<root-POS> MID MID UH&RA&0:0
          X1PT4=STR < ,:-0.025903170059057394
          X1PT4=STR < ,:-0.025903170059057394
          X1PT2=STR < U:0
          X1PT1=< U ,:0
          X1PT=STR < U ,:0
          XPT4=STR < ,&RA&0:0
          XPT3=STR U ,&RA&0:0
          XPT2=STR < U&RA&0:0
          XPT1=< U ,&RA&0:0
          XPT=STR < U ,&RA&0:0
          1PT4=STR <root-POS> ,:-0.025903170059057394
          1PT4=STR <root-POS> ,:-0.025903170059057394
          1PT2=STR <root-POS> UH:0
          1PT1=<root-POS> UH ,:0
          1PT=STR <root-POS> UH ,:0
          PT4=STR <root-POS> ,&RA&0:0
          PT3=STR UH ,&RA&0:0
          PT2=STR <root-POS> UH&RA&0:0
          PT1=<root-POS> UH ,&RA&0:0
          PT=STR <root-POS> UH ,&RA&0:0
          X1APT4=< MID MID:-0.03632653020122016
          X1APT4=< MID MID:-0.03632653020122016
          X1APT2=< MID U:0
          X1APT1=< MID U:0
          X1APT=< MID MID U:0
          XAPT4=< MID MID&RA&0:-0.03632653020122016
          XAPT4=< MID MID&RA&0:-0.03632653020122016
          XAPT2=< MID U&RA&0:0
          XAPT1=< MID U&RA&0:0
          XAPT=< MID MID U&RA&0:0
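     Many of the weights in the dump above are exactly 0, so one cheap experiment for question 2 is to prune zero-weight features before scoring. A hedged sketch; the %weight sample and the exact-zero cutoff are assumptions.

     use strict;
     use warnings;

     # A few entries from the dump above, as feature => weight.
     my %weight = (
         '1APT4=<root-POS> MID MID' => -0.03632653020122016,
         '1APT2=<root-POS> MID UH'  => 0,
         'X1PT4=STR < ,'            => -0.025903170059057394,
         'XPT4=STR < ,&RA&0'        => 0,
     );

     # Keep only features whose weight actually moves the edge score.
     my %pruned = map { $_ => $weight{$_} } grep { $weight{$_} != 0 } keys %weight;
     print scalar(keys %pruned), ' of ', scalar(keys %weight), " features kept\n";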

  7. Questions (continued):

     3. Why is the sentence correctness rate so low?!
     4. Do we really need "online large margin training"? Maybe just feature frequency?

  8. Thank you!
