Nanopublication: RAszCcnWVT

Full identifier: https://w3id.org/np/RAszCcnWVTYomkTM0J8K9p0PSp-gwfAIP8oDv_sfGKsdQ



Type: CoSMO Semantic Post

Assertion (https://w3id.org/np/RAszCcnWVT...#assertion) — comment (http://www.w3.org/2000/01/rdf-schema#comment), literal:

"You only learn a few parameters with your parameter-'efficient' finetuning. The rest is 💩

A whole line of work 🧵 shows that by throwing away redundancy we can get better LoRAs, use less memory, and of course merge models. https://twitter.com/LChoshen/status/1833879920348422216/photo/1

ComPEFT shows you can improve LoRAs by pruning aggressively and making the remaining weights binary (+/-). It also means parameter efficiency still relies on overparametrization (but only during training). https://x.com/prateeky2806/status/1727589818618523783

LASER shows the same on full models. https://x.com/pratyusha_PS/status/1739025292805468212 https://twitter.com/LChoshen/status/1833879922500080084/photo/1

In merging, many find that with only those few weights one can build a 'multitask' model, keeping the important weights for each task and switching between them. Those e.g. 1% of the weights also represent tasks well. Among many: https://www.alphaxiv.org/abs/2408.13656 https://www.alphaxiv.org/pdf/2405.07813 https://www.alphaxiv.org/pdf/2310.01886

These works focus on efficient multitask learning that compresses the models, keeps many models around, and switches between them as necessary.

Another option is to compress by applying SVD to the LoRA, separately or into a shared space, saving only the tiny differences. https://x.com/RickardGabriels/status/1810368375455207470

And since we discussed compression: of course this is all just 'model compression'. If you want to compress purely to save space, there are smarter ways: https://github.com/zipnn/zipnn" .
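The prune-and-binarize idea described above can be illustrated with a short sketch. This is a minimal illustration of the general technique (keep only the top-magnitude entries of a LoRA update, then replace each survivor with its sign times one shared scale), not the actual ComPEFT implementation; the function name and the per-tensor mean scale are assumptions for this example.

```python
# Sketch: aggressive magnitude pruning of a LoRA update, then sign
# binarization (+/- one shared scale). Illustrative only, not ComPEFT's code.
import numpy as np

def prune_and_binarize(delta, keep_frac=0.01):
    """Keep the top `keep_frac` of weights by magnitude; set each survivor
    to sign(w) * alpha, where alpha is one scalar for the whole tensor."""
    flat = np.abs(delta).ravel()
    k = max(1, int(keep_frac * flat.size))
    threshold = np.partition(flat, -k)[-k]   # k-th largest magnitude
    mask = np.abs(delta) >= threshold
    alpha = np.abs(delta[mask]).mean()       # shared scale (an assumed choice)
    return np.sign(delta) * mask * alpha

rng = np.random.default_rng(0)
delta = rng.normal(size=(64, 64))            # stand-in for a LoRA delta
sparse = prune_and_binarize(delta, keep_frac=0.05)
print(np.count_nonzero(sparse) / sparse.size)        # fraction kept ≈ keep_frac
print(np.unique(np.abs(sparse[sparse != 0])).size)   # 1: a single shared magnitude
```

The result stores only a sign pattern plus one scalar per tensor, which is why such updates are cheap to keep per task and to merge.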
Assertion (https://w3id.org/np/RAszCcnWVT...#assertion) — keywords (https://schema.org/keywords), literals: "finetuning", "low-rank-adapters", "model-compression", "multitask-learning", "serving-systems" .
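The thread above also mentions compressing LoRAs via SVD. A minimal sketch of that idea, assuming generic truncated SVD of the update matrix B @ A (not any specific library's API; the function name and shapes are illustrative):

```python
# Sketch: re-compress a LoRA update B @ A to an even lower rank via
# truncated SVD. Generic illustration, not a specific library's method.
import numpy as np

def svd_compress(B, A, new_rank):
    """Recompute delta = B @ A and keep only its top singular directions."""
    delta = B @ A
    U, S, Vt = np.linalg.svd(delta, full_matrices=False)
    B_c = U[:, :new_rank] * S[:new_rank]   # fold singular values into B
    A_c = Vt[:new_rank]
    return B_c, A_c

rng = np.random.default_rng(1)
B = rng.normal(size=(128, 8))   # LoRA factors at rank 8
A = rng.normal(size=(8, 128))
B_c, A_c = svd_compress(B, A, new_rank=4)
rel_err = np.linalg.norm(B @ A - B_c @ A_c) / np.linalg.norm(B @ A)
print(B_c.shape, A_c.shape, rel_err)
```

Truncating to the top singular directions is the best rank-`new_rank` approximation in the Frobenius norm, which is why the "tiny differences" discarded here are cheap to drop or to store separately.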
This nanopublication (https://w3id.org/np/RAszCcnWVT...) — created (http://purl.org/dc/terms/created), literal: "2024-09-11T18:07:24.837Z" .
Signature (https://w3id.org/np/RAszCcnWVT...#sig) — has signature target (http://purl.org/nanopub/x/hasSignatureTarget): this nanopublication (https://w3id.org/np/RAszCcnWVT...) .
Signature (https://w3id.org/np/RAszCcnWVT...#sig) — has algorithm (http://purl.org/nanopub/x/hasAlgorithm), literal: "RSA" .