User talk



Hi Yug,
A quick reminder that for testing there is the development instance, where you can experiment as much as you want, and only apply things here once they are fully ready. That will keep people from running into weird things when loading the site at the wrong moment ;).
Kisses — 0x010C ~talk~ 15:45, 26 December 2018 (UTC)
PS: and don't leave useless pages lying around (MediaWiki:Recordwizard); in this case it notably blocks possible future changes and translations pushed by translatewiki.

Phew! You really ran into it!!! *0* !!! I fixed it as fast as I could!!!! Thanks for your vigilance; everything seemed properly restored to me, unless I was blinded by the cache!... Ah, v2, that's right! Can you make me admin??? I would like to test the image support! Yug (talk) 20:09, 26 December 2018 (UTC)
PS: I don't really understand these translation questions / pages / tags... Yug (talk) 20:10, 26 December 2018 (UTC)
I sent you an email with the password of the admin test account. — 0x010C ~talk~ 21:59, 26 December 2018 (UTC)


Your intro and the history are indeed on LinguaLibre:About, but I removed what doesn't belong on that page (so that it stays at least minimally professional / clean / efficient). You can find the content back in the page history. — 0x010C ~talk~ 21:59, 26 December 2018 (UTC)

OK, cool! I recovered that from GitHub and I'm cleaning it up. I haven't decided yet where to put these things on LinguaLibre... I'll keep you posted.
On GitHub, the listes repository can be deleted! Yug (talk) 18:08, 28 December 2018 (UTC)

Ongoing work

Get your data:

$ git clone

Then, save the following as `Makefile` in the root of your directory:

# RUN:
#   make iso2=pl iso3=pol processing    # to do the work
#   make iso2=pl iso3=pol all           # to do the work AND print a few messages

all: processing messages

# Strip the trailing frequency count, prefix each word with "# ",
# then split the result into 2000-line chunks.
processing:
	sed -E 's/ [0-9]+$$//g' $(iso2)_50k.txt | sed -E 's/^/# /g' > $(iso2)-words-LL.txt
	split -d -l 2000 --additional-suffix=".txt" $(iso2)-words-LL.txt "$(iso3)-words-by-frequency-"

# Print a few sanity checks on the input and output files.
messages:
	head -n 5 $(iso2)_50k.txt
	head -n 5 $(iso2)-words-LL.txt
	head -n 5 "$(iso3)-words-by-frequency-00.txt"
	head -n 5 "$(iso3)-words-by-frequency-01.txt"
	wc -l $(iso2)-words-LL.txt
	wc -l "$(iso3)-words-by-frequency-01.txt"

Then find your {iso2}_50k.txt file, put both in the same folder, and run the command below with the iso2 and iso3 values you need:

make iso2=pl iso3=pol processing
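As a quick sanity check, the per-line sed transform from the recipe can be run directly in a shell. The two-line `pl_50k.txt` below is a made-up sample in the HermitDave "word count" format; note the Makefile's `$$` is written `$` outside make:

```shell
# Hypothetical HermitDave-style frequency file: "word count" per line.
printf 'je 12345\nde 11111\n' > pl_50k.txt

# Strip the trailing count, then prefix each word with "# ".
sed -E 's/ [0-9]+$//g' pl_50k.txt | sed -E 's/^/# /g'
# → # je
# → # de
```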

Yug (talk) 18:17, 28 December 2018 (UTC)


# Convert SUBTLEX-CH from GB18030 to UTF-8.
iconv -f "GB18030" -t "UTF-8" SUBTLEX-CH-WF.csv -o $iso2-words.txt
# Drop the numeric CSV columns, skip the 3-line header, keep the 20,000 most
# frequent words, and prefix each with "# ".
sed -E 's/(,[0-9]+.?[0-9]*)+//g' $iso2-words.txt | tail -n+4 | head -n 20000 | sed -E 's/^/# /g' > $iso2-words-LL.txt
# Split into numbered 2000-line chunks.
split -d -l 2000 --additional-suffix=".txt" $iso2-words-LL.txt "$iso3-words-by-frequency-"
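To see what the split step above produces, here is a minimal sketch with a hypothetical 5-word list and 2-line chunks instead of 2000-line ones (GNU coreutils `split`; the zho/cmn file names are made up for the example):

```shell
# Hypothetical tiny word list, already "# "-prefixed.
printf '# w1\n# w2\n# w3\n# w4\n# w5\n' > zho-words-LL.txt

# Split into numbered 2-line chunks:
# cmn-words-by-frequency-00.txt, -01.txt, -02.txt
split -d -l 2 --additional-suffix=".txt" zho-words-LL.txt "cmn-words-by-frequency-"
ls cmn-words-by-frequency-*
```

The last chunk simply holds the remainder (here, the single line `# w5`).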

Feature idea: table tracking existing languages


I have difficulty keeping track of all the languages I helped add to LinguaLibre. Taiwan has 16 languages and 42 local variations. Maybe such a table already exists... If not, it would be a positive to have a sortable table such as the one below:

Wikidata qid | LinguaLibre qid | English name | Language group | Active? | Numb. of recordings
Q715766 | Q51302 | Atayal | Taiwanese | Low | 4
Q718269 | Q51871 | Sakizaya | Taiwanese | Low | 6
... | ... | ... | ... | ... | ...

Yug (talk) 12:16, 31 December 2018 (UTC)

I'm looking into how LinguaLibre:Stats is coded; maybe I will be able to produce something :D Yug (talk) 12:59, 31 December 2018 (UTC)

WP query and frequency

Helper:

# Fetch a page (URL elided), split spaces into newlines (one word per line),
# count duplicates, and sort by descending frequency.
curl '' | tr '\040' '\012' | sort | uniq -c | sort -k 1,1 -n -r > output.txt
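Since the URL is elided above, the counting part of the pipeline can be sanity-checked on a local sample; `tr '\040' '\012'` rewrites spaces (octal 040) into newlines (octal 012), one word per line:

```shell
# One token per line, count duplicates, most frequent first.
printf 'the cat and the dog and the bird' |
  tr '\040' '\012' | sort | uniq -c | sort -k 1,1 -n -r
# First line of output is the most frequent word: "      3 the"
```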


Clean up corpus

# Concatenate the corpus, tokenize (one word per line), expand common French
# elisions (c -> ce, d -> du, j -> je, ...), drop pure numbers, lowercase
# sentence-case words (acronyms untouched), then count and sort by frequency.
find -iname 'fra-opus-100k.txt' -exec cat {} \; |
  grep -o '\w*' |
  sed -e "s/^c$/ce/gi" -e "s/^d$/du/gi" -e "s/^j$/je/gi" -e "s/^m$/moi/gi" \
      -e "s/^n$/ne/gi" -e "s/^qu$/que/gi" -e "s/^s$/se/gi" -e "s/^t$/toi/gi" \
      -e "/^[0-9]*$/d" |
  awk '/^[[:upper:]][^[:upper:]]*$/{$1=tolower(substr($1,1,1)) substr($1,2)}1' |
  awk '{a[$1]++}END{for(k in a)print a[k],k}' |
  sort -n -r -t' ' -k1,1 > fra-subtlex.txt
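The least obvious step in the pipeline above is the first `awk` filter, which lowercases sentence-case words while leaving acronyms untouched (the regex only matches lines whose sole uppercase letter is the first one). A small sketch with made-up tokens:

```shell
# "Le" is sentence-case -> lowercased; "chat" is untouched;
# "NASA" contains further capitals -> kept as-is.
printf 'Le\nchat\nNASA\n' |
  awk '/^[[:upper:]][^[:upper:]]*$/{$1=tolower(substr($1,1,1)) substr($1,2)}1'
# → le
# → chat
# → NASA
```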

# Turn a "# item"-per-line list into a single grep-able alternation: \(item1|item2|...\)
cat userlist.txt | sed -e 's/^# //g' | tr "\n" "\|" | tr "\^" "\(" | sed -e "s/^/\\\(/" -e "s/|$/\\\)/g"
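What the one-liner above builds, shown on a hypothetical three-name `userlist.txt` (the `tr "\^" "\("` step only matters when entries contain `^` anchors; it is a no-op here):

```shell
printf '# Alice\n# Bob\n# Carol\n' > userlist.txt

# Strip the "# " prefixes, join lines with "|", wrap in \( ... \).
cat userlist.txt | sed -e 's/^# //g' | tr '\n' '|' |
  sed -e 's/^/\\(/' -e 's/|$/\\)/g'
# → \(Alice|Bob|Carol\)
```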

Lingua Libre Story for September 2020

This is not an official story or newsletter. This is an attempt by a user to share some updates about the program. There might be more stories which I have missed.

September 2020 was an eventful month and we have seen a lot of activities of uploading new content and also around project-related discussion. Here are some of the best stories from September 2020.

  • 300,000 files: On 10 September 2020 we reached 300,000 pronunciation uploads. After the launch in August 2018, the first 100,000 files were uploaded by April 2019, and the milestone of 200,000 files was reached in January 2020. As of 30 September 2020 there are 366 speakers on this project, working in 92 languages.
  • Maximum number of pronunciations in a month: In September 2020, 23,209 files were uploaded. This is the highest number of files ever uploaded in a single calendar month (previously 22,963 files in June 2020, and 22,293 files in May 2019).
  • Indian language in the top 3: This month the Bengali language entered the top three languages by number of files uploaded using Lingua Libre. This is possibly the first time a non-European, Indian language entered the top three most-uploaded languages on the project.
  • Project chat: Several discussions started in the Chat room, such as Bug testing (you may help), Technical preparations, etc.

That's it. Have a good time. --টিটো দত্ত (Titodutta) (কথা) 13:35, 1 October 2020 (UTC)
This post is under the CC0 license; feel free to share with anyone, anywhere, without any restriction.

Re: Speeding bug

Bonjour/নমস্কার, I have been following the thread for some time; however, I did not understand it. You describe the bug as "nasty": do you mean that after recording, the audio speed becomes much faster? Yes, that happens. That's speeding up. If you mean the audio suddenly stops in the middle of previewing and tries to load, that also happens; that's speeding down (and thankfully it does not change the end result, so you can upload). Most probably you are talking about speeding up. I can add my opinion/view (and possibly ask you another question as well). Regards. --টিটো দত্ত (Titodutta) (কথা) 02:03, 16 October 2020 (UTC)

  • For some users, when they record a 2-second word, the audio produced is 0.3 seconds long: a sped-up version of what we want. Open and listen to this one: Q338365.
  • Why nasty? Because some people may record 5000 audios and 5% of the set is corrupt. That makes the whole dataset untrustworthy and unusable.
  • Time-wise, it would be faster to delete all 2000 recordings and ask the speaker to record again. But since we work with volunteers, telling them "hey, our web app failed, can you re-contribute the exact same thing for free again?" is quite embarrassing for everyone.
  • The small bit of luck is that this bug affects whole sessions. For example, among Luilui6666's uploads that day, only the 126 uploads around 04:5* am are corrupted. All those uploaded at 04:29 are fine. (Don't know why.)
Yug (talk) 09:14, 17 October 2020 (UTC)
Oh, that's kind of common. I do not record so many words at a time, and I double-check (speaker & earphone) before uploading (that's why it actually takes a lot more time, and my upload rate is pretty slow). Anyway, this is not an uncommon thing. The question I wanted to ask you: do you face this error for any given list? I face it sometimes. Let me explain: suppose I uploaded 12, 20, 15, 18 words, and while I am doing the fifth list/batch, the speeding issue starts, generally with 1–2 words out of 15–20. Next time, it will be more, and almost 50% of the words will be corrupt. The easy solution I use is: reload the page and start the Record Wizard from the beginning. That helps. Anyway, if anyone tests (the results are on the Project chat page) and finds no error, that does not definitely mean there is no error. In my observation, the error happens after a certain time/number of uploads. That's my thought on this so far.
The example you have given, yes that's the problem. However it is suggested to check every audio before uploading, perhaps. --টিটো দত্ত (Titodutta) (কথা) 09:32, 17 October 2020 (UTC)
Plus I do not try with so many words, my highest has been 140 or so words, that too I try very rarely. But yes, you are right, if it is a big list of a few hundred words, the error may start anytime, and that's problematic. --টিটো দত্ত (Titodutta) (কথা) 09:35, 17 October 2020 (UTC)
So you are affected as well. Please copy or move your report above into LinguaLibre:Chat_room#Speeding-up_bug_:_call_for_testers. We collect observed behaviors and brainstorm there. It's important to know how many people and which systems are affected. Yug (talk) 09:47, 17 October 2020 (UTC)
Absolutely, I face this issue as well. The problem is I cannot replicate it and do not know when this error is going to appear. Mostly it happens after uploading 50/100/any number of words. That's the reason (not being able to replicate it) I did not post there so far. Let me know if you think I should post this there. I am using Firefox 81.0.2 (64-bit) on Windows 10, with an external microphone. Regards. --টিটো দত্ত (Titodutta) (কথা) 10:00, 17 October 2020 (UTC)
@Titodutta: Add your system info to the table. In the comment column, specify briefly that it's occasional. Then, in the discussion, paste what you wrote me above, the long explainer of what you see. Maybe it will make someone think of something. Following Luilui's corrupted set (126 uploads, long sentences), I suspect longer sentences create a saturation effect. But we need more testimonies. Yug (talk) 10:28, 17 October 2020 (UTC)

Project:Otherworldly languages


This table maps the referent users to contact for various needs.

Referents, roles, user rights.
Username & email (Wiki) Github (Dev) Phabricator (Tasks) Social web (Communications) Community contact of...
0x010C Administrator
Owner (admin) ? tw:@LingLibre_WMFr
Jixitris User Owner (admin)/Dev
Yug Administrator Owner (admin)/Organisation Member
Xenophôn Administrator
Owner (admin) Wikimedia France
MickeyBarber User Owner (admin) Wikimedia France
Lyokoï Administrator Member tw:@LingLibre_WMFr
Pamputt Administrator
Member Member
Poslovitch User Owner (admin) Eastern France's academics
Adélaïde Calais WMFr User Owner (admin) Wikimedia France
Tshrinivasan User Member India
Eavqwiki Administrator Member Paris academics
WikiLucas00 Administrator Member
[[User:|User:]] User Member
Bold: most active user(s), referent one.

User essays

I have two user essays; did you see them? They describe the workflow I/we use for Bangla (Bengali).

Let me know if you find these interesting/worth-sharing. --টিটো দত্ত (Titodutta) (কথা) 00:06, 3 February 2021 (UTC)

Hello user:Titodutta, I'm quite busy at the moment with GitHub cleanup, coordination, dev & fixes, in addition to work and family life. I plan to turn back to LinguaLibre's community here next week and then check your resources. But as a rule of thumb on wikis, better to go too fast than to slow yourself down because of others ;) Yug (talk) 13:33, 4 February 2021 (UTC)


That would be perfect. Thank you for submitting this request for me! Poemat (talk) 00:52, 18 February 2021 (UTC)


Thank you! This limitation is stupid and counterproductive. I understand it's a Commons policy, not a Lingua Libre invention, but as long as it is in force, you could at least add a counter or a warning to the app to tell people they are approaching their limit. It can be very disappointing for new users. Olaf (talk) 01:16, 18 February 2021 (UTC)

Yes, it has been a sneaky thing to discover. The earliest users are also heavy Commons contributors and generally already have autopatrol rights, so we spent 2 years without noticing, precisely because it only affects new users... We are on it and will do our best. Yug (talk) 01:40, 18 February 2021 (UTC)
Regarding the bot - my bot is in Java, and it's a big complicated machine, which I have been developing for 12 years. Adding all the new pronunciation recordings the night after they appeared in Commons is just one of its functions. I'm afraid the different programming language and the focus on Polish Wiktionary makes my bot incompatible with your production. However, I can see another area where I could probably help. Each time I start the recording, I run my code to produce a list of words to record. I can see LL has a list feature but at least for the Polish language the lists are useless (sorry) - they contain inflected forms and words that have been already recorded by others. My list consists only of lemmas that have no recordings in Commons (both from Lingua Libre and in the old format), and it is sorted in descending frequency order. My bot could maintain similar lists for the most popular languages, updating them every night. It wouldn't be a problem, I've been maintaining similar frequency lists for years to help Polish Wiktionary editors create the Wiktionary articles about the most frequently used words in various languages, so most of the code is already done. The only things I would need are a way to upload the lists to the LL system, and a bit of configuration to make them possible to select in your record dialog. What do you think about this idea? Olaf (talk) 17:47, 19 February 2021 (UTC)
p.s. If you want to take a look at my production, here is a list of French lemmas that have the biggest number of French sections in various language editions of Wiktionary, but have no French section in Polish Wiktionary:
I maintain similar lists for about 60 other languages, updated nightly:
For my own needs I generate a similar list of Polish words with no recordings in Commons. This is what I could offer for many languages. I additionally split the Polish list to separate words with the /r/ consonants and the rest, because I can't speak /r/ properly, so the words with /r/ are for my wife (Poemat), but I can do the split in Polish only. Olaf (talk) 18:08, 19 February 2021 (UTC)
@Olaf: Hello, and thank you for this bot summary. It's interesting to discover other bots helping in unexpected ways. FYI, Poslovitch is now managing the LinguaLibre Bot. He got access rights recently, has Python abilities and is studying the bot.
Poslovitch and I are currently focused on Wikimedia India's Wiki Meet online conference. I myself contributed a bit too much recently and need a light wikibreak in the coming week(s). We should keep your bot and its approach in mind. Maybe migrate this conversation to LinguaLibre:Technical board? Yug (talk) 15:11, 20 February 2021 (UTC)

LiLi video is up

Your session was great. The video is here m:Wikimedia_Wikimeet_India_2021/Program. --টিটো দত্ত (Titodutta) (কথা) 20:28, 23 February 2021 (UTC)


Moved to LinguaLibre:Technical board --Yug (talk) 12:09, 25 February 2021 (UTC)

Would you be so kind as to delete the following redirects:

I have changed the capitalization a little bit in the code, so the lists maintained by the bot have names starting with "Lemmas" instead of "lemmas". The old versions are not needed. Sorry for adding more work. BTW, I found no template for Speedy Delete; I guess there was no need for it on this wiki? Olaf (talk) 00:40, 27 February 2021 (UTC)

The bot is written in Java. It's a big ugly blob of many functions accumulated over the last 12 years. And the list generation for LinguaLibre is not independent of the rest, because the same scanning of Wiktionaries is used in list generation for Polish Wiktionary, and Commons category scanning is used to add audio files to Pl-wikt. So it would be rather hard to move it to the lingua-libre repo unless I split it in half. Olaf (talk) 12:27, 1 March 2021 (UTC)

There are 72 "Lemmas-" lists, but last night I added three new ones (example). However, they are generated only for the Polish language, from the statistics of links between words in Polish Wiktionary: every word in a Wiktionary definition or example is linked to its lemma there (like here), so it was possible and produced very good results. I use just these lists when recording for LiLi. But I'm afraid it's not possible to generate a good list in this way for any other language.
I'm also thinking about generating lists of geographical names using Wikipedia and Wikidata, but it's for the future.
I believe your idea of importing Unilex lists is very good. :-) Olaf (talk) 01:09, 4 March 2021 (UTC)
Hey Olaf, thanks for the support. I'm brain-exhausted, but the bot is ready, and I see we are attacking our bottlenecks from several sides. It's pretty nice to witness. I made a test run with List:Ita/M…. That gives an idea. Anyway, thank you :) Yug (talk) 22:57, 4 March 2021 (UTC)
Ok, I understand the reasons for not using the Lemmas list. However I don't quite understand, what exactly am I supposed to do. Do you want me to generate the Marathi first-letter lists with this script and upload it with the bot? Should I remove the recorded words from them? Once? Every night? I'm not sure what you expect. In fact, if they divide the work among them, there will be no words recorded by other people, so perhaps the current option in the Record Wizard for removing one's own recordings is enough here? Olaf (talk) 01:47, 6 March 2021 (UTC)

Re: Lists

Wielkie dzięki! ("Thanks a lot!" in Polish.) I should have thought about it before advertising in the Chat Room, but it may be useful in the future. Do you think an automatically updated version of the Unilex lists, with recordings removed, would also be useful? Olaf (talk) 19:09, 5 March 2021 (UTC)

@Olaf: We are focused on code these days, and I do it too: forgetting the outreach. But this lists / coding sprint is landing. I still need a more serious pause on my side. Now that we have lists, better and wider outreach is clearly the next big thing on LinguaLibre.
I think your lists should focus on Wiktionary needs: the missing audios. On my side, I want to provide a stable track for users such as Titodutta, WikiLucas, Poslovitch: those who come and slowly record 10,000+ items. This will provide clean data for outside (e-)dictionaries.
I think I may delay the UNILEX import a bit, and in any case I won't import the composite IETF languages yet. I've been lightly overworking these past months, so I need a break away from the wikis. Yug (talk) 19:40, 5 March 2021 (UTC)

@Olaf: Hello again,
I'm very cautious about my imports given their size. Right now I'm back on naming conventions and would like to exchange ideas with you. I identified the following elements:

  • List:{Iso} : defines the language.
  • {purpose} : Most common, Missing wikt, letters, Swadesh
  • {source} : unilex, subtlex, etc.
  • {id}: defining the list within its series.
    • {range} : such as 1-1000, 00001-01000 and 00001_to_01000.
    • {number} : starts at 1 and goes up.
  • various separators : _, -, ,, _to_.

I tried a bunch of combinations, with different orders, separators and capitalization (UNILEX, Unilex, unilex). The following also needs to be kept in mind:

  • Title must be simple English.
  • Title must ease identification and loading. (So lists cannot all start with "Words".)
  • Title must ease search and maintenance, possibly by script.

For {id}, I realize the range was a developer-centered need. Better to use plain numbers.
The {source} can be required to discriminate between similar-purpose lists. Or should we have a policy of only one list series per type of approach? Only one frequency list per language, EITHER Unilex OR Subtlex OR HermitDave?
I have now reduced the formats to the following (ignore the separators):

Schema | Example | Comment
List:${Iso}/{purpose},_{source}_{id} | List:${Iso}/Words_most_used,_UNILEX_1 | Minimalist, purpose first.
List:${Iso}/{purpose}-${id} | List:${Iso}/Common_words_1 | Minimalist, no source. May be troublesome for later mass maintenance.
List:${Iso}/{source}-{purpose}_{id} | List:${Iso}/UNILEX-Common_words_1 | Minimalist, simple English 1. Minimalist, simple English 2. Minimalist letter.
List:${Iso}/{source}-{purpose}_{id} | List:${Iso}/Unilex-Common_words_1 | Source not capitalized.
List:${Iso}/{source}-{purpose}_{id} | List:${Iso}/UNILEX-Words_sorted_by_frequency_1 | Explanatory title.
List:${Iso}/{purpose} | List:${Iso}/Lemmas-without-audio-sorted-by-number-of-wiktionaries | Only one list, so no {id}. The {purpose} also exposes the source, so no independent {source} (it would be a duplicate). Explanatory title. Directed at the Wiktionary public, so "Lemmas" is fine.

I want to think about it twice, because the format we use in the coming days will likely become a de facto gentle recommendation. Any ideas and comments on these? Yug (talk) 08:15, 7 March 2021 (UTC)
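On the "search and maintenance, possibly by script" point, a predictable title scheme lets a script enumerate lists through the standard MediaWiki `allpages` API. A sketch only: the `List:` namespace id (3000 below) and the api.php location are assumptions to verify via `action=query&meta=siteinfo&siprop=namespaces`:

```shell
# Build an allpages query for every list under a given language prefix.
# apnamespace=3000 is a HYPOTHETICAL id for the "List:" namespace.
base='https://lingualibre.org/api.php'
params='action=query&list=allpages&apnamespace=3000&apprefix=Fra/&aplimit=500&format=json'
url="$base?$params"
echo "$url"
# The actual fetch (network call) would then be:
# curl -s "$url"
```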

Random thoughts about it:
  1. Lists are displayed in the Record Wizard in alphabetical order. If you put 10 frequency or Unilex lists at the same time, nothing else from this point down the alphabet is likely to be visible for a newcomer. (For example, in Marathi, there is no chance to see any new list, because the directory is bloated with lists having names starting with B. But Marathi is a special case, I understand). So I think uploading initially just one list of each type for each language is a good idea. There is always time to upload more when somebody has taken the bait.
  2. I wonder if simple English is always the best solution. Perhaps it would be better to name the lists in the local language if possible - not every local speaker must be good at English. I know, it's hard or impossible with so many languages.
  3. How do you solve the license problem for the Unilex lists? I believe the license should be visible somewhere, but currently, the Record Wizard has no such capability?
  4. Maybe we should put a special sign at the beginning of the name of each bot-created list, to make the lists always visible at the top of the directory for the newbies? Without it, the lists can be easily lost among old obsolete lists created by users.
  5. Perhaps the old user-created lists should be at some point removed when no activity takes place for a long time?
  6. Maybe there should be a marker, that the list is automatically updated? But the name visible in the Record Wizard is probably too short.
  7. Maybe there should be a separate directory in the Record Wizard for the user-created lists, and the "official lists" (perhaps they could have a better name). I mean, in some languages there are a lot of old lists, and a newcomer may have no idea what to start with.
  8. Perhaps the Record Wizard could show the list directory as a tree, or in a form similar to a file system? Then we could have as many lists of each kind as we want, and all the remarks above would be negligible.
Olaf (talk) 20:10, 7 March 2021 (UTC)
All this discussion makes me think that the list system should be greatly improved. I will open a new ticket on Phabricator to keep track of all your proposals. As a feature request, the new list system may be developed in the future. Pamputt (talk) 20:55, 7 March 2021 (UTC)
Yes. It was not needed when we barely had lists; things have changed. The minimum would be to 1) agree on a set of rules for list maintenance, so we can merge the small ones (see List:Mar/); 2) be able to have sections within a List page and, when loading, to load a series (ex: List:Mar/UNILEX), then a section (ex: List:Mar/UNILEX#2).
Also, we need developers. We need to get out of LiLi and do outreach.
Olaf, thanks for your input. I agree with your concerns (I will come back to them in a few days due to an IRL deadline). As for the license, I added {{UNILEX license}} on the talk page. Yug (talk) 21:06, 7 March 2021 (UTC)
Also: regarding the Mar list creations, we need better communication. Yug (talk) 09:24, 8 March 2021 (UTC)