Wiktionary
siwiktionary
https://si.wiktionary.org/wiki/%E0%B7%80%E0%B7%92%E0%B6%9A%E0%B7%8A%E0%B7%82%E0%B6%B1%E0%B6%BB%E0%B7%92:%E0%B6%B8%E0%B7%94%E0%B6%BD%E0%B7%8A_%E0%B6%B4%E0%B7%92%E0%B6%A7%E0%B7%94%E0%B7%80
MediaWiki 1.44.0-wmf.4
case-sensitive
මාධ්‍යය
විශේෂ
සාකච්ඡාව
පරිශීලක
පරිශීලක සාකච්ඡාව
වික්ෂනරි
වික්ෂනරි සාකච්ඡාව
ගොනුව
ගොනුව සාකච්ඡාව
මාධ්‍යවිකි
මාධ්‍යවිකි සාකච්ඡාව
සැකිල්ල
සැකිලි සාකච්ඡාව
උදවු
උදවු සාකච්ඡාව
ප්‍රවර්ගය
ප්‍රවර්ග සාකච්ඡාව
TimedText
TimedText talk
Module
Module talk
පරිශීලක:Lee
2
1875
193419
186103
2024-11-21T09:43:29Z
Lee
19
/* Links */
193419
wikitext
text/x-wiki
En Taro Adun!
{{බාබෙල්|footer= |si|en-3}}
==විශේෂ==
* {{tl|ඉංග්‍රීසි ව්‍යාපෘතියෙන් ආයාත කළ පිටුව}}
* {{tl|ව්‍යාජ නාමාවකාශය}}
* {{tl|නාමාවකාශය}}
==Editing==
* [[Module:sinhala]]
* [[Module:sinhala project]]
* [[Module:අභිධානය]]
* {{tl|ඉංග්‍රීසි පදය සිංහලට}}
* {{tl|ඉංග්‍රීසි පදය සිංහල බහු වචනයට}}
* {{tl|නැවත වෙනස් කළ යුතු සැකිලි}}
* {{tl|නිවැරදි කළ යුතු උපදෙස් පිටු}}
* {{tl|උපදෙස් උප පිටුව}}
==Links==
* {{clc|මොඩියුල දෝෂ සහිත පිටු}}
* {{clc|ප්‍රවර්ග පද්ධතිය තුළ අර්ථ දක්වා නොමැති ප්‍රවර්ග}}
* {{clc|දෝෂ සහගත නාම සහිත ප්‍රවර්ග}}
* {{clc|හිස් ප්‍රවර්ග}}
* {{clc|Categories that are not defined in the category tree}}
* {{clc|Categories with incorrect name}}
* [[:ප්‍රවර්ගය:Categories needing attention]]
* [[:ප්‍රවර්ගය:Categories with invalid label]]
* [[:ප්‍රවර්ගය:අක්ෂර දෝෂ සහිත පිටු]]
* {{clc|කැටගරි-ට්‍රී-ශෝ භාවිතා වන පිටු}}
* {{clc|ඉක්මන් මකා දැමීම සඳහා යෝජිතයෝ}}
* [[:ප්‍රවර්ගය:තවමත් හිස්කර නොමැති මෘදු යළි යොමු ප්‍රවර්ග]]
* [[:ප්‍රවර්ගය:හිස් ප්‍රවර්ග]]
* [[:ප්‍රවර්ගය:විශේෂ මොඩියුල]]
* [[:ප්‍රවර්ගය:විශේෂ සැකිලි]]
* [[:ප්‍රවර්ගය:නැවත වෙනස් කළ යුතු සැකිලි]]
* [[:ප්‍රවර්ගය:නිවැරදි කළ යුතු උපදෙස් පිටු]]
* [[විශේෂ:සියළු_පිටු]]
* [[වික්ෂනරි:පරිපාලකවරු]]
* [[:සැකිල්ල:namespaces]]
* [[:ප්‍රවර්ගය:Files with no machine-readable license]]
==අනෙකුත් අඩවි==
* [[:wikibooks:si:]]
==පරිවර්තනය කළයුතු==
* [[:Module:form of/data]]
* [[:Module:headword/templates]]
== Edits ==
* [[:en:MediaWiki:Common.css]]
* [[:en:MediaWiki:Common.js]]
* [[MediaWiki:Common.css]]
* [[MediaWiki:Common.js]]
* [[MediaWiki:Newarticletext]]
* [[:සැකිල්ල:Newarticletext]]
* [[MediaWiki:Noarticletext]]
* [[:සැකිල්ල:Noarticletext]]
* [[MediaWiki:Noexactmatch]]
* [[:සැකිල්ල:Noexactmatch]]
* [[MediaWiki:Nogomatch]]
* [[Wiktionary:Project-Nogomatch]]
* [[MediaWiki:Searchmenu-new]]
* [[:සැකිල්ල:Searchmenu-new]]
--
* [[:සැකිල්ල:alternate pages]]
* [[:පරිශීලක:Lee/usenec]]
* [[:පරිශීලක:Lee/Gadget-legacy.js]]
* [[:පරිශීලක:Lee/newentrywiz.js]]
* [[:en:User:Yair_rand]]
--
* [[Special:MyPage/skin.css]]
* [[Special:MyPage/skin.js]]
* [[:මාධ්‍යවිකි:Deletereason-dropdown]]
* [[:මාධ්‍යවිකි:Common.css]]
* [https://en.wiktionary.org/w/index.php?title=MediaWiki:Common.css&action=edit MediaWiki:Common.css]
* [[:මාධ්‍යවිකි:Common.js]]
* [https://en.wiktionary.org/w/index.php?title=MediaWiki:Common.js&action=edit MediaWiki:Common.js]
* [[:වික්ෂනරි:සිංහල නොවන දවසේ වචනය]]
--
* [[/කාර්යය ලැයිස්තුව]]
== මාධ්‍යවිකි පරිගණක මෘදුකාංගය ==
* [[MediaWiki:Clearyourcache]]
* [[MediaWiki:Scribunto-doc-page-show]]
* [[MediaWiki:Scribunto-doc-page-does-not-exist]]
* [[MediaWiki:Scribunto-common-error-category]]
* [[MediaWiki:Jsonconfig-use-category]]
[[Category:User si]]
[[Category:User en]]
[[en:User:Lee]]
nzi1oeuo06v15j1pgl2y7mqru2skdcq
193424
193419
2024-11-21T10:18:57Z
Lee
19
/* Links */
193424
wikitext
text/x-wiki
En Taro Adun!
{{බාබෙල්|footer= |si|en-3}}
==විශේෂ==
* {{tl|ඉංග්‍රීසි ව්‍යාපෘතියෙන් ආයාත කළ පිටුව}}
* {{tl|ව්‍යාජ නාමාවකාශය}}
* {{tl|නාමාවකාශය}}
==Editing==
* [[Module:sinhala]]
* [[Module:sinhala project]]
* [[Module:අභිධානය]]
* {{tl|ඉංග්‍රීසි පදය සිංහලට}}
* {{tl|ඉංග්‍රීසි පදය සිංහල බහු වචනයට}}
* {{tl|නැවත වෙනස් කළ යුතු සැකිලි}}
* {{tl|නිවැරදි කළ යුතු උපදෙස් පිටු}}
* {{tl|උපදෙස් උප පිටුව}}
==Links==
* {{clc|මොඩියුල දෝෂ සහිත පිටු}}
* {{clc|ප්‍රවර්ග පද්ධතිය තුළ අර්ථ දක්වා නොමැති ප්‍රවර්ග}}
* {{clc|දෝෂ සහගත නාම සහිත ප්‍රවර්ග}}
* {{clc|හිස් ප්‍රවර්ග}}
* {{clc|Categories that are not defined in the category tree}}
* {{clc|Categories with incorrect name}}
* [[:ප්‍රවර්ගය:Categories needing attention]]
* [[:ප්‍රවර්ගය:Categories with invalid label]]
* [[:ප්‍රවර්ගය:අක්ෂර දෝෂ සහිත පිටු]]
* {{clc|කැටගරි-ට්‍රී-ශෝ භාවිතා වන පිටු}}
* [[:ප්‍රවර්ගය:තවමත් හිස්කර නොමැති මෘදු යළි යොමු ප්‍රවර්ග]]
* [[:ප්‍රවර්ගය:හිස් ප්‍රවර්ග]]
* [[:ප්‍රවර්ගය:විශේෂ මොඩියුල]]
* [[:ප්‍රවර්ගය:විශේෂ සැකිලි]]
* [[:ප්‍රවර්ගය:නැවත වෙනස් කළ යුතු සැකිලි]]
* [[:ප්‍රවර්ගය:නිවැරදි කළ යුතු උපදෙස් පිටු]]
* [[විශේෂ:සියළු_පිටු]]
* [[වික්ෂනරි:පරිපාලකවරු]]
* [[:සැකිල්ල:namespaces]]
* [[:ප්‍රවර්ගය:Files with no machine-readable license]]
* {{clc|ඉක්මන් මකා දැමීම සඳහා යෝජිතයෝ}}
* {{clc|අභියෝගයට ලක් නොවන මකා දැමීම සඳහා යෝජිතයෝ}}
==අනෙකුත් අඩවි==
* [[:wikibooks:si:]]
==පරිවර්තනය කළයුතු==
* [[:Module:form of/data]]
* [[:Module:headword/templates]]
== Edits ==
* [[:en:MediaWiki:Common.css]]
* [[:en:MediaWiki:Common.js]]
* [[MediaWiki:Common.css]]
* [[MediaWiki:Common.js]]
* [[MediaWiki:Newarticletext]]
* [[:සැකිල්ල:Newarticletext]]
* [[MediaWiki:Noarticletext]]
* [[:සැකිල්ල:Noarticletext]]
* [[MediaWiki:Noexactmatch]]
* [[:සැකිල්ල:Noexactmatch]]
* [[MediaWiki:Nogomatch]]
* [[Wiktionary:Project-Nogomatch]]
* [[MediaWiki:Searchmenu-new]]
* [[:සැකිල්ල:Searchmenu-new]]
--
* [[:සැකිල්ල:alternate pages]]
* [[:පරිශීලක:Lee/usenec]]
* [[:පරිශීලක:Lee/Gadget-legacy.js]]
* [[:පරිශීලක:Lee/newentrywiz.js]]
* [[:en:User:Yair_rand]]
--
* [[Special:MyPage/skin.css]]
* [[Special:MyPage/skin.js]]
* [[:මාධ්‍යවිකි:Deletereason-dropdown]]
* [[:මාධ්‍යවිකි:Common.css]]
* [https://en.wiktionary.org/w/index.php?title=MediaWiki:Common.css&action=edit MediaWiki:Common.css]
* [[:මාධ්‍යවිකි:Common.js]]
* [https://en.wiktionary.org/w/index.php?title=MediaWiki:Common.js&action=edit MediaWiki:Common.js]
* [[:වික්ෂනරි:සිංහල නොවන දවසේ වචනය]]
--
* [[/කාර්යය ලැයිස්තුව]]
== මාධ්‍යවිකි පරිගණක මෘදුකාංගය ==
* [[MediaWiki:Clearyourcache]]
* [[MediaWiki:Scribunto-doc-page-show]]
* [[MediaWiki:Scribunto-doc-page-does-not-exist]]
* [[MediaWiki:Scribunto-common-error-category]]
* [[MediaWiki:Jsonconfig-use-category]]
[[Category:User si]]
[[Category:User en]]
[[en:User:Lee]]
c8zj5pp7ls0c0tl13c59wq9d7oaltmo
සැකිල්ල:delete
10
1943
193422
60363
2024-11-21T09:45:59Z
Lee
19
ප්‍රවර්ගය:අභියෝගයට ලක් නොවන මකා දැමීම සඳහා යෝජිතයෝ
193422
wikitext
text/x-wiki
{{maintenance box|red
|image=[[File:Icon delete.svg|48px|link=]]
|title=මෙම {{<noinclude>temp|</noinclude>පිටු වර්ගය}} වහාම ක්‍රියාත්මක වන පරිදි මකා දැමීම සඳහා යෝජනා කොට ඇත. {{#if:{{{1|}}}|හේතුව ලෙස දක්වා ඇත්තේ: “{{{1}}}{{#ifeq:{{#invoke:string|sub|{{{1}}}|-1}}|.||.}}”|.}}
|text= මෙය මකා නොදැමිය යුතු යැයි ඔබ සිතන්නේ නම්, හෝ අවම වශයෙන් සාකච්ඡාවට බඳුන් විය යුතු යැයි සිතන්නේ නම්, <span class="plainlinks">[{{fullurl:{{FULLPAGENAME}}|action=edit}} මෙම පිටුව සංස්කරණය කොට]</span>, මෙම සැකිල්ල {{temp|rfd}} හෝ {{temp|rfv}} (ගැලපෙන ලෙස) වෙතට මාරු කරන්න. ඉන් පසුව [[වික්ෂනරි:මකාදැමීම සඳහා ඉල්ලීම්{{#if:{{NAMESPACE}}|/වෙනත්}}]] හෝ [[වික්ෂනරි:සත්‍යාපනය සඳහා ඉල්ලීම්]] වෙතට ගොස් පැහැදිලි කිරීමේ ඡේදයක් එකතු කරන්න.
}}<!--
--><includeonly><!--
-->{{#if:{{{nocat|}}}||<!--
-->[[ප්‍රවර්ගය:අභියෝගයට ලක් නොවන මකා දැමීම සඳහා යෝජිතයෝ|{{#if:{{{sort|}}}|{{{sort|}}}|{{PAGENAME}}}}]]<!--
-->}}<!--
--></includeonly><!--
--><noinclude>{{documentation}}</noinclude>
0sq5hb76pxh5y08u6uh53ogr9rtluiu
ප්‍රවර්ගය:අභියෝගයට ලක් නොවන මකා දැමීම සඳහා යෝජිතයෝ
14
2593
193420
60360
2024-11-21T09:45:15Z
Lee
19
Lee විසින් [[ප්‍රවර්ගය:ඉක්මන් මකා දැමීම සඳහා යෝජිතයෝ]] සිට [[ප්‍රවර්ගය:අභියෝගයට ලක් නොවන මකා දැමීම සඳහා යෝජිතයෝ]] වෙත පිටුව ගෙන යන ලදී
38160
wikitext
text/x-wiki
{{shortcut|WT:SD|WT:CSD}}
[[ගොනුව:Icono aviso borrar.png|60px|left]]
You can add pages to this category by using {{temp|delete}}. Sysops, please remember to check whatlinkshere and the page's history before deleting pages.
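For instance, a page can be nominated by placing the template at the top of its wikitext, with the reason passed as the first parameter (the wording of the reason below is only an illustration):
<pre>
{{delete|Empty page created by mistake}}
</pre>
As the template source above suggests, <code>nocat=</code> suppresses the automatic categorisation and <code>sort=</code> overrides the sort key.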
<hr style="clear:both" />
[[ප්‍රවර්ගය:මකාදැමීම සඳහා ඉල්ලීම්]]
glfm8f6mzf8g89tj07r7bywha92xnud
Module:languages/data/3/h
828
6265
193438
182921
2024-11-08T00:42:37Z
en>Theknightwho
0
Hainanese code changed to "hnm".
193438
Scribunto
text/plain
local m_lang = require("Module:languages")
local m_langdata = require("Module:languages/data")
local u = require("Module:string utilities").char
local c = m_langdata.chars
local p = m_langdata.puaChars
local s = m_langdata.shared
local m = {}
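-- Orientation note (inferred from the entries in this file rather than from
-- any separate documentation): each entry maps a language code to a table
-- whose positional fields are [1] the canonical name, [2] the Wikidata item
-- number (without the "Q" prefix), [3] the family or group code, and
-- [4] the default script code(s). Named fields such as ancestors, translit,
-- sort_key, entry_name, display_text and standardChars add language-specific
-- behaviour on top of these.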
m["haa"] = {
"Hän",
28272,
"ath-nor",
"Latn",
}
m["hab"] = {
"Hanoi Sign Language",
12632107,
"sgn",
"Latn", -- when documented
}
m["hac"] = {
"Gurani",
33733,
"ira-zgr",
"ku-Arab",
translit = "ckb-translit",
}
m["had"] = {
"Hatam",
56825,
"paa-wpa",
}
m["haf"] = {
"Haiphong Sign Language",
39868240,
"sgn",
}
m["hag"] = {
"Hanga",
35426,
"nic-dag",
"Latn",
}
m["hah"] = {
"Hahon",
3125730,
"poz-ocw",
"Latn",
}
m["hai"] = {
"Haida",
33303,
"qfa-iso",
"Latn",
}
m["haj"] = {
"Hajong",
3350576,
"qfa-mix",
"as-Beng, Latn",
ancestors = "tbq-pro, inc-oas, inc-obn",
}
m["hak"] = {
"Hakka",
33375,
"zhx",
"Hants",
ancestors = "ltc",
generate_forms = "zh-generateforms",
sort_key = "Hani-sortkey",
}
m["hal"] = {
"Halang",
56307,
"mkh",
"Latn",
}
m["ham"] = {
"Hewa",
5748345,
"paa-spk",
}
m["hao"] = {
"Hakö",
3125871,
"poz-ocw",
"Latn",
}
m["hap"] = {
"Hupla",
5946223,
"ngf",
}
m["har"] = {
"Harari",
33626,
"sem-eth",
"Ethi",
translit = "Ethi-translit",
}
m["has"] = {
"Haisla",
3107399,
"wak",
}
m["hav"] = {
"Havu",
5684097,
"bnt-shh",
"Latn",
}
m["haw"] = {
"Hawaiian",
33569,
"poz-pep",
"Latn",
display_text = {
from = {"‘"},
to = {"ʻ"}
},
sort_key = {remove_diacritics = c.macron},
standardChars = "AaĀāEeĒēIiĪīOoŌōUuŪūHhKkLlMmNnPpWwʻ" .. c.punc,
}
m["hax"] = {
"Southern Haida",
12953543,
"qfa-iso",
ancestors = "hai",
}
m["hay"] = {
"Haya",
35756,
"bnt-haj",
}
m["hba"] = {
"Hamba",
11028905,
"bnt-tet",
}
m["hbb"] = {
"Huba",
56290,
"cdc-cbm",
}
m["hbn"] = {
"Heiban",
35523,
"alv-hei",
}
m["hbu"] = {
"Habu",
1567033,
"poz-cet",
"Latn",
}
m["hca"] = {
"Andaman Creole Hindi",
7599417,
"crp",
ancestors = "hi, bn, ta",
}
m["hch"] = {
"Huichol",
35575,
"azc",
"Latn",
}
m["hdn"] = {
"Northern Haida",
20054484,
"qfa-iso",
ancestors = "hai",
}
m["hds"] = {
"Honduras Sign Language",
3915496,
"sgn",
"Latn", -- when documented
}
m["hdy"] = {
"Hadiyya",
56613,
"cus-hec",
}
m["hea"] = {
"Northern Qiandong Miao",
3138832,
"hmn",
}
m["hed"] = {
"Herdé",
56253,
"cdc-mas",
}
m["heg"] = {
"Helong",
35432,
"poz-tim",
"Latn",
}
m["heh"] = {
"Hehe",
3129390,
"bnt-bki",
"Latn",
}
m["hei"] = {
"Heiltsuk",
5699507,
"wak",
"Latn",
}
m["hem"] = {
"Hemba",
5711209,
"bnt-lbn",
}
m["hgm"] = {
"Haiǁom",
4494781,
"khi-khk",
"Latn",
}
m["hgw"] = {
"Haigwai",
5639108,
"poz-ocw",
"Latn",
}
m["hhi"] = {
"Hoia Hoia",
5877767,
"ngf",
}
m["hhr"] = {
"Kerak",
11010783,
"alv-jfe",
}
m["hhy"] = {
"Hoyahoya",
15633149,
"ngf",
}
m["hia"] = {
"Lamang",
35700,
"cdc-cbm",
"Latn",
}
m["hib"] = {
"Hibito",
3135164,
}
m["hid"] = {
"Hidatsa",
3135234,
"sio-mor",
"Latn",
}
m["hif"] = {
"Fiji Hindi",
46728,
"inc-hie",
"Latn",
ancestors = "awa",
}
m["hig"] = {
"Kamwe",
56271,
"cdc-cbm",
}
m["hih"] = {
"Pamosu",
12953011,
"ngf-mad",
}
m["hii"] = {
"Hinduri",
5766763,
"him",
}
m["hij"] = {
"Hijuk",
35274,
"bnt-bsa",
}
m["hik"] = {
"Seit-Kaitetu",
7446989,
"poz-cma",
}
m["hil"] = {
"Hiligaynon",
35978,
"phi",
"Latn",
entry_name = {Latn = {remove_diacritics = c.grave .. c.acute .. c.circ}},
standardChars = {
Latn = "AaBbKkDdEeGgHhIiLlMmNnOoPpRrSsTtUuWwYy",
c.punc
},
sort_key = {
Latn = "tl-sortkey"
},
}
m["hio"] = {
"Tshwa",
963636,
"khi-kal",
}
m["hir"] = {
"Himarimã",
5765127,
}
m["hit"] = {
"Hittite",
35668,
"ine-ana",
"Xsux",
}
m["hiw"] = {
"Hiw",
3138713,
"poz-vnn",
"Latn",
}
m["hix"] = {
"Hixkaryana",
56522,
"sai-prk",
"Latn",
}
m["hji"] = {
"Haji",
5639933,
"poz-mly",
}
m["hka"] = {
"Kahe",
3892562,
"bnt-chg",
"Latn",
}
m["hke"] = {
"Hunde",
3065432,
"bnt-shh",
"Latn",
}
m["hkh"] = {
"Pogali",
105198619,
"inc-kas",
}
m["hkk"] = {
"Hunjara-Kaina Ke",
63213931,
"ngf",
}
m["hkn"] = {
"Mel-Khaonh",
19059577,
"mkh-ban",
}
m["hks"] = {
"Hong Kong Sign Language",
17038844,
"sgn",
}
m["hla"] = {
"Halia",
3125959,
"poz-ocw",
"Latn",
}
m["hlb"] = {
"Halbi",
3695692,
"inc-hal",
"Deva, Orya",
}
m["hld"] = {
"Halang Doan",
3914632,
"mkh-ban",
}
m["hle"] = {
"Hlersu",
5873537,
"tbq-llo",
}
m["hlt"] = {
"Nga La",
12952942,
"tbq-kuk",
}
m["hma"] = {
"Southern Mashan Hmong",
12953560,
"hmn",
"Latn",
}
m["hmb"] = {
"Humburi Senni",
35486,
"son",
}
m["hmc"] = {
"Central Huishui Hmong",
12953558,
"hmn",
}
m["hmd"] = {
"A-Hmao",
1108934,
"hmn",
"Latn, Plrd",
}
m["hme"] = {
"Eastern Huishui Hmong",
12953559,
"hmn",
}
m["hmf"] = {
"Hmong Don",
22911602,
"hmn",
}
m["hmg"] = {
"Southwestern Guiyang Hmong",
27478542,
"hmn",
}
m["hmh"] = {
"Southwestern Huishui Hmong",
12953565,
"hmn",
}
m["hmi"] = {
"Northern Huishui Hmong",
27434946,
"hmn",
}
m["hmj"] = {
"Ge",
11251864,
"hmn",
}
m["hmk"] = {
"Yemaek",
8050724,
"qfa-kor",
"Hani",
sort_key = "Hani-sortkey",
}
m["hml"] = {
"Luopohe Hmong",
14468943,
"hmn",
}
m["hmm"] = {
"Central Mashan Hmong",
12953561,
"hmn",
}
m["hmp"] = {
"Northern Mashan Hmong",
12953564,
"hmn",
}
m["hmq"] = {
"Eastern Qiandong Miao",
27431369,
"hmn",
}
m["hmr"] = {
"Hmar",
2992841,
"tbq-kuk",
ancestors = "lus",
}
m["hms"] = {
"Southern Qiandong Miao",
12953562,
"hmn",
}
m["hmt"] = {
"Hamtai",
5646436,
"ngf",
}
m["hmu"] = {
"Hamap",
12952484,
"qfa-tap",
}
m["hmv"] = {
"Hmong Dô",
22911598,
"hmn",
}
m["hmw"] = {
"Western Mashan Hmong",
12953563,
"hmn",
}
m["hmy"] = {
"Southern Guiyang Hmong",
12953553,
"hmn",
}
m["hmz"] = {
"Hmong Shua",
25559603,
"hmn",
}
m["hna"] = {
"Mina",
56532,
"cdc-cbm",
}
m["hnd"] = {
"Southern Hindko",
382273,
"inc-pan",
ancestors = "lah",
}
m["hne"] = {
"Chhattisgarhi",
33158,
"inc-hie",
"Deva",
ancestors = "inc-oaw",
translit = "hi-translit"
}
m["hnh"] = {
"ǁAni",
3832982,
"khi-kal",
"Latn",
}
m["hni"] = {
"Hani",
56516,
"tbq-han",
}
m["hnj"] = {
"Green Hmong",
3138831,
"hmn",
"Latn, Hmng, Hmnp",
}
m["hnm"] = {
"Hainanese",
934541,
"zhx-nan",
"Hants",
generate_forms = "zh-generateforms",
sort_key = "Hani-sortkey",
}
m["hnn"] = {
"Hanunoo",
35435,
"phi",
"Hano, Latn",
translit = {Hano = "hnn-translit"},
override_translit = true,
entry_name = {Latn = {remove_diacritics = c.grave .. c.acute .. c.circ}},
standardChars = {
Latn = "AaBbKkDdEeGgHhIiLlMmNnOoPpRrSsTtUuWwYy",
c.punc
},
sort_key = {
Latn = "tl-sortkey",
},
}
m["hno"] = {
"Northern Hindko",
6346358,
"inc-pan",
"Arab",
ancestors = "lah",
}
m["hns"] = {
"Caribbean Hindustani",
1843468,
"inc", -- "crp"?
ancestors = "bho, awa",
}
m["hnu"] = {
"Hung",
12632753,
"mkh-vie",
}
m["hoa"] = {
"Hoava",
3138887,
"poz-ocw",
"Latn",
}
m["hob"] = {
"Austronesian Mari",
6760941,
"poz-ocw",
"Latn",
}
m["hoc"] = {
"Ho",
33270,
"mun",
"Wara, Orya, Deva, Latn",
}
m["hod"] = {
"Holma",
56331,
"cdc-cbm",
"Latn",
}
m["hoe"] = {
"Horom",
3914008,
"nic-ple",
"Latn",
}
m["hoh"] = {
"Hobyót",
33299,
"sem-sar",
"Arab, Latn",
}
m["hoi"] = {
"Holikachuk",
28508,
"ath-nor",
"Latn",
}
m["hoj"] = {
"Hadothi",
33227,
"raj",
"Deva",
translit = "hi-translit",
}
m["hol"] = {
"Holu",
4121133,
"bnt-pen",
"Latn",
}
m["hom"] = {
"Homa",
3449953,
"bnt-boa",
"Latn",
}
m["hoo"] = {
"Holoholo",
3139484,
"bnt-tkm",
"Latn",
}
m["hop"] = {
"Hopi",
56421,
"azc",
"Latn",
}
m["hor"] = {
"Horo",
641748,
"csu-sar",
}
m["hos"] = {
"Ho Chi Minh City Sign Language",
16111971,
"sgn",
"Latn", -- when documented
}
m["hot"] = {
"Hote",
12632404,
"poz-ocw",
"Latn",
}
m["hov"] = {
"Hovongan",
5917269,
"poz",
}
m["how"] = {
"Honi",
56842,
"tbq-han",
}
m["hoy"] = {
"Holiya",
5880707,
"dra-kan",
}
m["hoz"] = {
"Hozo",
5923010,
"omv-mao",
}
m["hpo"] = {
"Hpon",
5923277,
"tbq-brm",
}
m["hps"] = {
"Hawai'i Pidgin Sign Language",
33358,
"sgn",
"Latn", -- when documented
}
m["hra"] = {
"Hrangkhol",
5923435,
"tbq-kuk",
}
m["hrc"] = {
"Niwer Mil",
30323994,
"poz-oce",
"Latn",
}
m["hre"] = {
"Hrê",
3915794,
"mkh-nbn",
}
m["hrk"] = {
"Haruku",
5675762,
"poz-cma",
}
m["hrm"] = {
"Horned Miao",
63213949,
"hmn",
}
m["hro"] = {
"Haroi",
3127568,
"cmc",
"Latn",
}
m["hrp"] = {
"Nhirrpi",
32571318,
"aus-kar",
}
m["hrt"] = {
"Hértevin",
33290,
"sem-nna",
"Latn",
}
m["hru"] = {
"Hruso",
5923933,
"sit-hrs",
}
m["hrw"] = {
"Warwar Feni",
56704265,
"poz-oce",
"Latn",
}
m["hrx"] = {
"Hunsrik",
304049,
"gmw-hgm",
"Latn",
ancestors = "gmw-cfr",
}
m["hrz"] = {
"Harzani",
56464,
"xme-ttc",
ancestors = "xme-ttc-nor",
}
m["hsb"] = {
"Upper Sorbian",
13248,
"wen",
"Latn",
sort_key = s["wen-sortkey"],
}
m["hsh"] = {
"Hungarian Sign Language",
13636869,
"sgn",
"Latn", -- when documented
}
m["hsl"] = {
"Hausa Sign Language",
3915462,
"sgn",
"Latn", -- when documented
}
m["hsn"] = {
"Xiang",
13220,
"zhx",
"Hants",
ancestors = "ltc",
generate_forms = "zh-generateforms",
translit = "zh-translit",
sort_key = "Hani-sortkey",
}
m["hss"] = {
"Harsusi",
33423,
"sem-sar",
"Arab, Latn",
}
m["hti"] = {
"Hoti",
5912372,
"poz-cma",
"Latn",
}
m["hto"] = {
"Minica Huitoto",
948514,
"sai-wit",
"Latn",
}
m["hts"] = {
"Hadza",
33411,
"qfa-iso",
"Latn",
}
m["htu"] = {
"Hitu",
5872700,
"poz-cma",
"Latn",
}
m["hub"] = {
"Huambisa",
1526037,
"sai-jiv",
"Latn",
}
m["huc"] = {
"ǂHoan",
2053913,
"khi-kxa",
"Latn",
}
m["hud"] = {
"Huaulu",
12952504,
"poz-cma",
"Latn",
}
m["huf"] = {
"Humene",
11732231,
"ngf",
"Latn",
}
m["hug"] = {
"Huachipaeri",
3446617,
"sai-har",
"Latn",
}
m["huh"] = {
"Huilliche",
35531,
"sai-ara",
"Latn",
}
m["hui"] = {
"Huli",
3125121,
"paa-eng",
"Latn",
}
m["huj"] = {
"Northern Guiyang Hmong",
12953554,
"hmn",
}
m["huk"] = {
"Hulung",
12952505,
"poz-cet",
}
m["hul"] = {
"Hula",
6382179,
"poz-ocw",
"Latn",
}
m["hum"] = {
"Hungana",
10975396,
"bnt-yak",
}
m["huo"] = {
"Hu",
3141783,
"mkh-pal",
}
m["hup"] = {
"Hupa",
28058,
"ath-pco",
"Latn",
}
m["huq"] = {
"Tsat",
34133,
"cmc",
}
m["hur"] = {
"Halkomelem",
35388,
"sal",
"Latn",
}
m["hus"] = {
"Wastek",
35573,
"myn",
"Latn",
}
m["huu"] = {
"Murui Huitoto",
2640935,
"sai-wit",
"Latn",
}
m["huv"] = {
"Huave",
12954031,
"qfa-iso",
"Latn",
}
m["huw"] = {
"Hukumina",
3142988,
"poz-cma",
"Latn",
}
m["hux"] = {
"Nüpode Huitoto",
56333,
"sai-wit",
"Latn",
}
m["huy"] = {
"Hulaulá",
33426,
"sem-nna",
}
m["huz"] = {
"Hunzib",
56564,
"cau-ets",
"Cyrl",
translit = "huz-translit",
display_text = {Cyrl = s["cau-Cyrl-displaytext"]},
entry_name = {Cyrl = s["cau-Cyrl-entryname"]},
}
m["hvc"] = {
"Haitian Vodoun Culture Language",
3504239,
"crp",
"Latn",
}
m["hvk"] = {
"Haveke",
5683513,
"poz-cln",
"Latn",
}
m["hvn"] = {
"Sabu",
3128792,
"poz-cet",
"Latn",
}
m["hwa"] = {
"Wané",
3914887,
"kro-ekr",
"Latn",
}
m["hwc"] = {
"Hawaiian Creole",
35602,
"crp",
"Latn",
}
m["hwo"] = {
"Hwana",
56498,
"cdc-cbm",
"Latn",
}
m["hya"] = {
"Hya",
56798,
"cdc-cbm",
"Latn",
}
return m_lang.finalizeLanguageData(m_lang.addDefaultTypes(m, true))
ia75i4gmgx44dej9cxm4cqrmvilehsx
193439
193438
2024-11-21T10:28:26Z
Lee
19
[[:en:Module:languages/data/3/h]] වෙතින් එක් සංශෝධනයක්
193438
Scribunto
text/plain
local m_lang = require("Module:languages")
local m_langdata = require("Module:languages/data")
local u = require("Module:string utilities").char
local c = m_langdata.chars
local p = m_langdata.puaChars
local s = m_langdata.shared
local m = {}
m["haa"] = {
"Hän",
28272,
"ath-nor",
"Latn",
}
m["hab"] = {
"Hanoi Sign Language",
12632107,
"sgn",
"Latn", -- when documented
}
m["hac"] = {
"Gurani",
33733,
"ira-zgr",
"ku-Arab",
translit = "ckb-translit",
}
m["had"] = {
"Hatam",
56825,
"paa-wpa",
}
m["haf"] = {
"Haiphong Sign Language",
39868240,
"sgn",
}
m["hag"] = {
"Hanga",
35426,
"nic-dag",
"Latn",
}
m["hah"] = {
"Hahon",
3125730,
"poz-ocw",
"Latn",
}
m["hai"] = {
"Haida",
33303,
"qfa-iso",
"Latn",
}
m["haj"] = {
"Hajong",
3350576,
"qfa-mix",
"as-Beng, Latn",
ancestors = "tbq-pro, inc-oas, inc-obn",
}
m["hak"] = {
"Hakka",
33375,
"zhx",
"Hants",
ancestors = "ltc",
generate_forms = "zh-generateforms",
sort_key = "Hani-sortkey",
}
m["hal"] = {
"Halang",
56307,
"mkh",
"Latn",
}
m["ham"] = {
"Hewa",
5748345,
"paa-spk",
}
m["hao"] = {
"Hakö",
3125871,
"poz-ocw",
"Latn",
}
m["hap"] = {
"Hupla",
5946223,
"ngf",
}
m["har"] = {
"Harari",
33626,
"sem-eth",
"Ethi",
translit = "Ethi-translit",
}
m["has"] = {
"Haisla",
3107399,
"wak",
}
m["hav"] = {
"Havu",
5684097,
"bnt-shh",
"Latn",
}
m["haw"] = {
"Hawaiian",
33569,
"poz-pep",
"Latn",
display_text = {
from = {"‘"},
to = {"ʻ"}
},
sort_key = {remove_diacritics = c.macron},
standardChars = "AaĀāEeĒēIiĪīOoŌōUuŪūHhKkLlMmNnPpWwʻ" .. c.punc,
}
m["hax"] = {
"Southern Haida",
12953543,
"qfa-iso",
ancestors = "hai",
}
m["hay"] = {
"Haya",
35756,
"bnt-haj",
}
m["hba"] = {
"Hamba",
11028905,
"bnt-tet",
}
m["hbb"] = {
"Huba",
56290,
"cdc-cbm",
}
m["hbn"] = {
"Heiban",
35523,
"alv-hei",
}
m["hbu"] = {
"Habu",
1567033,
"poz-cet",
"Latn",
}
m["hca"] = {
"Andaman Creole Hindi",
7599417,
"crp",
ancestors = "hi, bn, ta",
}
m["hch"] = {
"Huichol",
35575,
"azc",
"Latn",
}
m["hdn"] = {
"Northern Haida",
20054484,
"qfa-iso",
ancestors = "hai",
}
m["hds"] = {
"Honduras Sign Language",
3915496,
"sgn",
"Latn", -- when documented
}
m["hdy"] = {
"Hadiyya",
56613,
"cus-hec",
}
m["hea"] = {
"Northern Qiandong Miao",
3138832,
"hmn",
}
m["hed"] = {
"Herdé",
56253,
"cdc-mas",
}
m["heg"] = {
"Helong",
35432,
"poz-tim",
"Latn",
}
m["heh"] = {
"Hehe",
3129390,
"bnt-bki",
"Latn",
}
m["hei"] = {
"Heiltsuk",
5699507,
"wak",
"Latn",
}
m["hem"] = {
"Hemba",
5711209,
"bnt-lbn",
}
m["hgm"] = {
"Haiǁom",
4494781,
"khi-khk",
"Latn",
}
m["hgw"] = {
"Haigwai",
5639108,
"poz-ocw",
"Latn",
}
m["hhi"] = {
"Hoia Hoia",
5877767,
"ngf",
}
m["hhr"] = {
"Kerak",
11010783,
"alv-jfe",
}
m["hhy"] = {
"Hoyahoya",
15633149,
"ngf",
}
m["hia"] = {
"Lamang",
35700,
"cdc-cbm",
"Latn",
}
m["hib"] = {
"Hibito",
3135164,
}
m["hid"] = {
"Hidatsa",
3135234,
"sio-mor",
"Latn",
}
m["hif"] = {
"Fiji Hindi",
46728,
"inc-hie",
"Latn",
ancestors = "awa",
}
m["hig"] = {
"Kamwe",
56271,
"cdc-cbm",
}
m["hih"] = {
"Pamosu",
12953011,
"ngf-mad",
}
m["hii"] = {
"Hinduri",
5766763,
"him",
}
m["hij"] = {
"Hijuk",
35274,
"bnt-bsa",
}
m["hik"] = {
"Seit-Kaitetu",
7446989,
"poz-cma",
}
m["hil"] = {
"Hiligaynon",
35978,
"phi",
"Latn",
entry_name = {Latn = {remove_diacritics = c.grave .. c.acute .. c.circ}},
standardChars = {
Latn = "AaBbKkDdEeGgHhIiLlMmNnOoPpRrSsTtUuWwYy",
c.punc
},
sort_key = {
Latn = "tl-sortkey"
},
}
m["hio"] = {
"Tshwa",
963636,
"khi-kal",
}
m["hir"] = {
"Himarimã",
5765127,
}
m["hit"] = {
"Hittite",
35668,
"ine-ana",
"Xsux",
}
m["hiw"] = {
"Hiw",
3138713,
"poz-vnn",
"Latn",
}
m["hix"] = {
"Hixkaryana",
56522,
"sai-prk",
"Latn",
}
m["hji"] = {
"Haji",
5639933,
"poz-mly",
}
m["hka"] = {
"Kahe",
3892562,
"bnt-chg",
"Latn",
}
m["hke"] = {
"Hunde",
3065432,
"bnt-shh",
"Latn",
}
m["hkh"] = {
"Pogali",
105198619,
"inc-kas",
}
m["hkk"] = {
"Hunjara-Kaina Ke",
63213931,
"ngf",
}
m["hkn"] = {
"Mel-Khaonh",
19059577,
"mkh-ban",
}
m["hks"] = {
"Hong Kong Sign Language",
17038844,
"sgn",
}
m["hla"] = {
"Halia",
3125959,
"poz-ocw",
"Latn",
}
m["hlb"] = {
"Halbi",
3695692,
"inc-hal",
"Deva, Orya",
}
m["hld"] = {
"Halang Doan",
3914632,
"mkh-ban",
}
m["hle"] = {
"Hlersu",
5873537,
"tbq-llo",
}
m["hlt"] = {
"Nga La",
12952942,
"tbq-kuk",
}
m["hma"] = {
"Southern Mashan Hmong",
12953560,
"hmn",
"Latn",
}
m["hmb"] = {
"Humburi Senni",
35486,
"son",
}
m["hmc"] = {
"Central Huishui Hmong",
12953558,
"hmn",
}
m["hmd"] = {
"A-Hmao",
1108934,
"hmn",
"Latn, Plrd",
}
m["hme"] = {
"Eastern Huishui Hmong",
12953559,
"hmn",
}
m["hmf"] = {
"Hmong Don",
22911602,
"hmn",
}
m["hmg"] = {
"Southwestern Guiyang Hmong",
27478542,
"hmn",
}
m["hmh"] = {
"Southwestern Huishui Hmong",
12953565,
"hmn",
}
m["hmi"] = {
"Northern Huishui Hmong",
27434946,
"hmn",
}
m["hmj"] = {
"Ge",
11251864,
"hmn",
}
m["hmk"] = {
"Yemaek",
8050724,
"qfa-kor",
"Hani",
sort_key = "Hani-sortkey",
}
m["hml"] = {
"Luopohe Hmong",
14468943,
"hmn",
}
m["hmm"] = {
"Central Mashan Hmong",
12953561,
"hmn",
}
m["hmp"] = {
"Northern Mashan Hmong",
12953564,
"hmn",
}
m["hmq"] = {
"Eastern Qiandong Miao",
27431369,
"hmn",
}
m["hmr"] = {
"Hmar",
2992841,
"tbq-kuk",
ancestors = "lus",
}
m["hms"] = {
"Southern Qiandong Miao",
12953562,
"hmn",
}
m["hmt"] = {
"Hamtai",
5646436,
"ngf",
}
m["hmu"] = {
"Hamap",
12952484,
"qfa-tap",
}
m["hmv"] = {
"Hmong Dô",
22911598,
"hmn",
}
m["hmw"] = {
"Western Mashan Hmong",
12953563,
"hmn",
}
m["hmy"] = {
"Southern Guiyang Hmong",
12953553,
"hmn",
}
m["hmz"] = {
"Hmong Shua",
25559603,
"hmn",
}
m["hna"] = {
"Mina",
56532,
"cdc-cbm",
}
m["hnd"] = {
"Southern Hindko",
382273,
"inc-pan",
ancestors = "lah",
}
m["hne"] = {
"Chhattisgarhi",
33158,
"inc-hie",
"Deva",
ancestors = "inc-oaw",
translit = "hi-translit"
}
m["hnh"] = {
"ǁAni",
3832982,
"khi-kal",
"Latn",
}
m["hni"] = {
"Hani",
56516,
"tbq-han",
}
m["hnj"] = {
"Green Hmong",
3138831,
"hmn",
"Latn, Hmng, Hmnp",
}
m["hnm"] = {
"Hainanese",
934541,
"zhx-nan",
"Hants",
generate_forms = "zh-generateforms",
sort_key = "Hani-sortkey",
}
m["hnn"] = {
"Hanunoo",
35435,
"phi",
"Hano, Latn",
translit = {Hano = "hnn-translit"},
override_translit = true,
entry_name = {Latn = {remove_diacritics = c.grave .. c.acute .. c.circ}},
standardChars = {
Latn = "AaBbKkDdEeGgHhIiLlMmNnOoPpRrSsTtUuWwYy",
c.punc
},
sort_key = {
Latn = "tl-sortkey",
},
}
m["hno"] = {
"Northern Hindko",
6346358,
"inc-pan",
"Arab",
ancestors = "lah",
}
m["hns"] = {
"Caribbean Hindustani",
1843468,
"inc", -- "crp"?
ancestors = "bho, awa",
}
m["hnu"] = {
"Hung",
12632753,
"mkh-vie",
}
m["hoa"] = {
"Hoava",
3138887,
"poz-ocw",
"Latn",
}
m["hob"] = {
"Austronesian Mari",
6760941,
"poz-ocw",
"Latn",
}
m["hoc"] = {
"Ho",
33270,
"mun",
"Wara, Orya, Deva, Latn",
}
m["hod"] = {
"Holma",
56331,
"cdc-cbm",
"Latn",
}
m["hoe"] = {
"Horom",
3914008,
"nic-ple",
"Latn",
}
m["hoh"] = {
"Hobyót",
33299,
"sem-sar",
"Arab, Latn",
}
m["hoi"] = {
"Holikachuk",
28508,
"ath-nor",
"Latn",
}
m["hoj"] = {
"Hadothi",
33227,
"raj",
"Deva",
translit = "hi-translit",
}
m["hol"] = {
"Holu",
4121133,
"bnt-pen",
"Latn",
}
m["hom"] = {
"Homa",
3449953,
"bnt-boa",
"Latn",
}
m["hoo"] = {
"Holoholo",
3139484,
"bnt-tkm",
"Latn",
}
m["hop"] = {
"Hopi",
56421,
"azc",
"Latn",
}
m["hor"] = {
"Horo",
641748,
"csu-sar",
}
m["hos"] = {
"Ho Chi Minh City Sign Language",
16111971,
"sgn",
"Latn", -- when documented
}
m["hot"] = {
"Hote",
12632404,
"poz-ocw",
"Latn",
}
m["hov"] = {
"Hovongan",
5917269,
"poz",
}
m["how"] = {
"Honi",
56842,
"tbq-han",
}
m["hoy"] = {
"Holiya",
5880707,
"dra-kan",
}
m["hoz"] = {
"Hozo",
5923010,
"omv-mao",
}
m["hpo"] = {
"Hpon",
5923277,
"tbq-brm",
}
m["hps"] = {
"Hawai'i Pidgin Sign Language",
33358,
"sgn",
"Latn", -- when documented
}
m["hra"] = {
"Hrangkhol",
5923435,
"tbq-kuk",
}
m["hrc"] = {
"Niwer Mil",
30323994,
"poz-oce",
"Latn",
}
m["hre"] = {
"Hrê",
3915794,
"mkh-nbn",
}
m["hrk"] = {
"Haruku",
5675762,
"poz-cma",
}
m["hrm"] = {
"Horned Miao",
63213949,
"hmn",
}
m["hro"] = {
"Haroi",
3127568,
"cmc",
"Latn",
}
m["hrp"] = {
"Nhirrpi",
32571318,
"aus-kar",
}
m["hrt"] = {
"Hértevin",
33290,
"sem-nna",
"Latn",
}
m["hru"] = {
"Hruso",
5923933,
"sit-hrs",
}
m["hrw"] = {
"Warwar Feni",
56704265,
"poz-oce",
"Latn",
}
m["hrx"] = {
"Hunsrik",
304049,
"gmw-hgm",
"Latn",
ancestors = "gmw-cfr",
}
m["hrz"] = {
"Harzani",
56464,
"xme-ttc",
ancestors = "xme-ttc-nor",
}
m["hsb"] = {
"Upper Sorbian",
13248,
"wen",
"Latn",
sort_key = s["wen-sortkey"],
}
m["hsh"] = {
"Hungarian Sign Language",
13636869,
"sgn",
"Latn", -- when documented
}
m["hsl"] = {
"Hausa Sign Language",
3915462,
"sgn",
"Latn", -- when documented
}
m["hsn"] = {
"Xiang",
13220,
"zhx",
"Hants",
ancestors = "ltc",
generate_forms = "zh-generateforms",
translit = "zh-translit",
sort_key = "Hani-sortkey",
}
m["hss"] = {
"Harsusi",
33423,
"sem-sar",
"Arab, Latn",
}
m["hti"] = {
"Hoti",
5912372,
"poz-cma",
"Latn",
}
m["hto"] = {
"Minica Huitoto",
948514,
"sai-wit",
"Latn",
}
m["hts"] = {
"Hadza",
33411,
"qfa-iso",
"Latn",
}
m["htu"] = {
"Hitu",
5872700,
"poz-cma",
"Latn",
}
m["hub"] = {
"Huambisa",
1526037,
"sai-jiv",
"Latn",
}
m["huc"] = {
"ǂHoan",
2053913,
"khi-kxa",
"Latn",
}
m["hud"] = {
"Huaulu",
12952504,
"poz-cma",
"Latn",
}
m["huf"] = {
"Humene",
11732231,
"ngf",
"Latn",
}
m["hug"] = {
"Huachipaeri",
3446617,
"sai-har",
"Latn",
}
m["huh"] = {
"Huilliche",
35531,
"sai-ara",
"Latn",
}
m["hui"] = {
"Huli",
3125121,
"paa-eng",
"Latn",
}
m["huj"] = {
"Northern Guiyang Hmong",
12953554,
"hmn",
}
m["huk"] = {
"Hulung",
12952505,
"poz-cet",
}
m["hul"] = {
"Hula",
6382179,
"poz-ocw",
"Latn",
}
m["hum"] = {
"Hungana",
10975396,
"bnt-yak",
}
m["huo"] = {
"Hu",
3141783,
"mkh-pal",
}
m["hup"] = {
"Hupa",
28058,
"ath-pco",
"Latn",
}
m["huq"] = {
"Tsat",
34133,
"cmc",
}
m["hur"] = {
"Halkomelem",
35388,
"sal",
"Latn",
}
m["hus"] = {
"Wastek",
35573,
"myn",
"Latn",
}
m["huu"] = {
"Murui Huitoto",
2640935,
"sai-wit",
"Latn",
}
m["huv"] = {
"Huave",
12954031,
"qfa-iso",
"Latn",
}
m["huw"] = {
"Hukumina",
3142988,
"poz-cma",
"Latn",
}
m["hux"] = {
"Nüpode Huitoto",
56333,
"sai-wit",
"Latn",
}
m["huy"] = {
"Hulaulá",
33426,
"sem-nna",
}
m["huz"] = {
"Hunzib",
56564,
"cau-ets",
"Cyrl",
translit = "huz-translit",
display_text = {Cyrl = s["cau-Cyrl-displaytext"]},
entry_name = {Cyrl = s["cau-Cyrl-entryname"]},
}
m["hvc"] = {
"Haitian Vodoun Culture Language",
3504239,
"crp",
"Latn",
}
m["hvk"] = {
"Haveke",
5683513,
"poz-cln",
"Latn",
}
m["hvn"] = {
"Sabu",
3128792,
"poz-cet",
"Latn",
}
m["hwa"] = {
"Wané",
3914887,
"kro-ekr",
"Latn",
}
m["hwc"] = {
"Hawaiian Creole",
35602,
"crp",
"Latn",
}
m["hwo"] = {
"Hwana",
56498,
"cdc-cbm",
"Latn",
}
m["hya"] = {
"Hya",
56798,
"cdc-cbm",
"Latn",
}
return m_lang.finalizeLanguageData(m_lang.addDefaultTypes(m, true))
ia75i4gmgx44dej9cxm4cqrmvilehsx
Module:labels/data/topical
828
7988
193329
185274
2024-11-10T22:08:14Z
en>Ktom
0
+inheritance law
193329
Scribunto
text/plain
local labels = {}
-- This file is split into two sections: topical labels and labels for set-type categories.
-- Each section is sorted alphabetically.
-- Topical labels
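-- Orientation note (inferred from the entries below rather than stated
-- anywhere in this file): a label table may carry "aliases" (alternative
-- spellings of the label), "display" (wikitext shown instead of the raw
-- label), "Wikipedia" (link the label to a Wikipedia article), and
-- "topical_categories", which is either true (apparently: categorise under
-- the label's own name) or an explicit category name such as
-- "Football (American)".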
labels["ABDL"] = {
display = "[[ABDL]]",
topical_categories = true,
}
labels["Abrahamism"] = {
display = "[[Abrahamism#Noun|Abrahamism]]",
topical_categories = true,
}
labels["accounting"] = {
display = "[[accounting#Noun|accounting]]",
topical_categories = true,
}
labels["acoustics"] = {
display = "[[acoustics]]",
topical_categories = true,
}
labels["acting"] = {
display = "[[acting#Noun|acting]]",
topical_categories = true,
}
labels["advertising"] = {
display = "[[advertising#Noun|advertising]]",
topical_categories = true,
}
labels["aeronautics"] = {
display = "[[aeronautics]]",
topical_categories = true,
}
labels["aerospace"] = {
display = "[[aerospace]]",
topical_categories = true,
}
labels["aesthetic"] = {
aliases = {"aesthetics"},
display = "[[aesthetic]]",
topical_categories = "Aesthetics",
}
labels["agriculture"] = {
aliases = {"farming"},
display = "[[agriculture]]",
topical_categories = true,
}
labels["Ahmadiyya"] = {
aliases = {"Ahmadiyyat", "Ahmadi"},
display = "[[Ahmadiyya]]",
topical_categories = true,
}
labels["aircraft"] = {
display = "[[aircraft]]",
topical_categories = true,
}
labels["alchemy"] = {
display = "[[alchemy]]",
topical_categories = true,
}
labels["alcoholic beverages"] = {
aliases = {"alcohol"},
display = "[[alcoholic#Adjective|alcoholic]] [[beverage]]s",
topical_categories = true,
}
labels["alcoholism"] = {
display = "[[alcoholism]]",
topical_categories = true,
}
labels["algebra"] = {
display = "[[algebra]]",
topical_categories = true,
}
labels["algebraic geometry"] = {
display = "[[algebraic geometry]]",
topical_categories = true,
}
labels["algebraic topology"] = {
display = "[[algebraic topology]]",
topical_categories = true,
}
labels["alt-right"] = {
aliases = {"Alt-right", "altright", "Altright"},
display = "[[alt-right]]",
topical_categories = true,
}
labels["alternative medicine"] = {
display = "[[alternative medicine]]",
topical_categories = true,
}
labels["amateur radio"] = {
aliases = {"ham radio"},
display = "[[amateur radio]]",
topical_categories = true,
}
labels["American football"] = {
display = "[[American football]]",
topical_categories = "Football (American)",
}
labels["analytic geometry"] = {
display = "[[analytic geometry]]",
topical_categories = "Geometry",
}
labels["analytical chemistry"] = {
display = "[[analytical]] [[chemistry]]",
topical_categories = true,
}
labels["anarchism"] = {
display = "[[anarchism]]",
topical_categories = true,
}
labels["anatomy"] = {
display = "[[anatomy]]",
topical_categories = true,
}
labels["Ancient Greece"] = {
display = "[[Ancient Greece]]",
topical_categories = true,
}
labels["Ancient Rome"] = {
display = "[[Ancient Rome]]",
topical_categories = true,
}
labels["Anglicanism"] = {
aliases = {"Anglican"},
display = "[[Anglicanism]]",
topical_categories = true,
}
labels["animation"] = {
display = "[[animation]]",
topical_categories = true,
}
labels["anime"] = {
display = "[[anime]]",
topical_categories = "Japanese fiction",
}
labels["anthropology"] = {
display = "[[anthropology]]",
topical_categories = true,
}
labels["arachnology"] = {
display = "[[arachnology]]",
topical_categories = true,
}
labels["Arabian god"] = {
display = "[[Arabian]] [[mythology]]",
topical_categories = "Arabian deities",
}
labels["archaeological culture"] = {
aliases = {"archeological culture", "archaeological cultures", "archeological cultures"},
display = "[[archaeology]]",
topical_categories = "Archaeological cultures",
}
labels["archaeology"] = {
aliases = {"archeology"},
display = "[[archaeology]]",
topical_categories = true,
}
labels["archery"] = {
display = "[[archery]]",
topical_categories = true,
}
labels["architecture"] = {
display = "[[architecture]]",
topical_categories = true,
}
labels["arithmetic"] = {
display = "[[arithmetic]]",
topical_categories = true,
}
labels["Armenian mythology"] = {
display = "[[Armenian]] [[mythology]]",
topical_categories = true,
}
labels["art"] = {
aliases = {"arts"},
display = "[[art#Noun|art]]",
topical_categories = true,
}
labels["artificial intelligence"] = {
aliases = {"AI"},
display = "[[artificial intelligence]]",
topical_categories = true,
}
labels["artillery"] = {
display = "[[weaponry]]",
topical_categories = true,
}
labels["Arthurian legend"] = {
aliases = {"Arthurian mythology"},
display = "[[w:Arthurian legend|Arthurian legend]]",
topical_categories = "Arthurian mythology",
}
labels["astrology"] = {
aliases = {"horoscope", "zodiac"},
display = "[[astrology]]",
topical_categories = true,
}
labels["astronautics"] = {
aliases = {"rocketry"},
display = "[[astronautics]]",
topical_categories = true,
}
labels["astronomy"] = {
display = "[[astronomy]]",
topical_categories = true,
}
labels["astrophysics"] = {
display = "[[astrophysics]]",
topical_categories = true,
}
labels["Asturian mythology"] = {
display = "[[Asturian]] [[mythology]]",
topical_categories = true,
}
labels["athletics"] = {
display = "[[athletics]]",
topical_categories = true,
}
labels["Australian Aboriginal mythology"] = {
display = "[[w:Australian Aboriginal religion and mythology|Australian Aboriginal mythology]]",
topical_categories = true,
}
labels["Australian rules football"] = {
display = "[[Australian rules football]]",
topical_categories = true,
}
labels["autism"] = {
display = "[[autism]]",
topical_categories = true,
}
labels["automotive"] = {
aliases = {"automotives"},
display = "[[automotive]]",
topical_categories = true,
}
labels["aviation"] = {
aliases = {"air transport"},
display = "[[aviation]]",
topical_categories = true,
}
labels["backgammon"] = {
display = "[[backgammon]]",
topical_categories = true,
}
labels["bacteria"] = {
display = "[[bacteriology]]",
topical_categories = true,
}
labels["bacteriology"] = {
display = "[[bacteriology]]",
topical_categories = true,
}
labels["badminton"] = {
display = "[[badminton]]",
topical_categories = true,
}
labels["baking"] = {
display = "[[baking#Noun|baking]]",
topical_categories = true,
}
labels["ball games"] = {
aliases = {"ball sports"},
display = "[[ball game]]s",
topical_categories = true,
}
labels["ballet"] = {
display = "[[ballet]]",
topical_categories = true,
}
labels["Bangladeshi politics"] = {
display = "[[w:Politics of Bangladesh|Bangladeshi politics]]",
topical_categories = true,
}
labels["banking"] = {
display = "[[banking#Noun|banking]]",
topical_categories = true,
}
labels["baseball"] = {
display = "[[baseball]]",
topical_categories = true,
}
labels["basketball"] = {
display = "[[basketball]]",
topical_categories = true,
}
labels["BDSM"] = {
display = "[[BDSM]]",
topical_categories = true,
}
labels["beekeeping"] = {
display = "[[beekeeping]]",
topical_categories = true,
}
labels["beer"] = {
display = "[[beer]]",
topical_categories = true,
}
labels["betting"] = {
display = "[[gambling#Noun|gambling]]",
topical_categories = true,
}
labels["biblical"] = {
aliases = {"Bible", "bible", "Biblical"},
display = "[[Bible|biblical]]",
topical_categories = "Bible",
}
labels["billiards"] = {
display = "[[billiards]]",
topical_categories = true,
}
labels["bingo"] = {
display = "[[bingo]]",
topical_categories = true,
}
labels["biochemistry"] = {
display = "[[biochemistry]]",
topical_categories = true,
}
labels["biology"] = {
display = "[[biology]]",
topical_categories = true,
}
labels["biotechnology"] = {
display = "[[biotechnology]]",
topical_categories = true,
}
labels["birdwatching"] = {
display = "[[birdwatching#Noun|birdwatching]]",
topical_categories = true,
}
labels["blacksmithing"] = {
display = "[[blacksmithing]]",
topical_categories = true,
}
labels["blogging"] = {
display = "[[blogging#Noun|blogging]]",
topical_categories = "Internet",
}
labels["board games"] = {
aliases = {"board game"},
display = "[[board game]]s",
topical_categories = true,
}
labels["board sports"] = {
display = "[[boardsport|board sports]]",
topical_categories = true,
}
labels["bodybuilding"] = {
display = "[[bodybuilding#Noun|bodybuilding]]",
topical_categories = true,
}
labels["botany"] = {
display = "[[botany]]",
topical_categories = true,
}
labels["bowling"] = {
display = "[[bowling#Noun|bowling]]",
topical_categories = true,
}
labels["bowls"] = {
aliases = {"lawn bowls", "crown green bowls"},
display = "[[bowls]]",
topical_categories = "Bowls (game)",
}
labels["boxing"] = {
display = "[[boxing#Noun|boxing]]",
topical_categories = true,
}
labels["brewing"] = {
display = "[[brewing#Noun|brewing]]",
topical_categories = true,
}
labels["bridge"] = {
display = "[[bridge#English:_game|bridge]]",
topical_categories = true,
}
labels["broadcasting"] = {
display = "[[broadcasting#Noun|broadcasting]]",
topical_categories = true,
}
labels["bryology"] = {
display = "[[bryology]]",
topical_categories = true,
}
labels["Buddhism"] = {
display = "[[Buddhism]]",
topical_categories = true,
}
labels["Buddhist deity"] = {
aliases = {"Buddhist goddess", "Buddhist god"},
display = "[[Buddhism]]",
topical_categories = "Buddhist deities",
}
labels["bullfighting"] = {
display = "[[bullfighting]]",
topical_categories = true,
}
labels["business"] = {
aliases = {"professional"},
display = "[[business]]",
topical_categories = true,
}
labels["Byzantine Empire"] = {
display = "[[Byzantine Empire]]",
topical_categories = true,
}
labels["calculus"] = {
display = "[[calculus]]",
topical_categories = true,
}
labels["calligraphy"] = {
display = "[[calligraphy]]",
topical_categories = true,
}
labels["Canadian football"] = {
display = "[[Canadian football]]",
topical_categories = true,
}
labels["canoeing"] = {
display = "[[canoeing#Noun|canoeing]]",
topical_categories = "Water sports",
}
labels["capitalism"] = {
display = "[[capitalism]]",
topical_categories = true,
}
labels["card games"] = {
aliases = {"cards", "card game", "playing card"},
display = "[[card game]]s",
topical_categories = true,
}
labels["cardiology"] = {
display = "[[cardiology]]",
topical_categories = true,
}
labels["carpentry"] = {
display = "[[carpentry]]",
topical_categories = true,
}
labels["cartography"] = {
display = "[[cartography]]",
topical_categories = true,
}
labels["cartomancy"] = {
display = "[[cartomancy]]",
topical_categories = true,
}
labels["castells"] = {
display = "[[castells]]",
topical_categories = true,
}
labels["category theory"] = {
display = "[[category theory]]",
topical_categories = true,
}
labels["Catholicism"] = {
aliases = {"catholicism", "Catholic", "catholic"},
display = "[[Catholicism]]",
topical_categories = true,
}
labels["caving"] = {
display = "[[caving#Noun|caving]]",
topical_categories = true,
}
labels["cellular automata"] = {
display = "[[cellular automata]]",
topical_categories = true,
}
labels["Celtic mythology"] = {
display = "[[Celtic]] [[mythology]]",
topical_categories = true,
}
labels["ceramics"] = {
display = "[[ceramics]]",
topical_categories = true,
}
labels["cheerleading"] = {
display = "[[cheerleading#Noun|cheerleading]]",
topical_categories = true,
}
labels["chemical element"] = {
display = "[[chemistry]]",
topical_categories = "Chemical elements",
}
labels["chemical engineering"] = {
display = "[[chemical engineering]]",
topical_categories = true,
}
labels["chemistry"] = {
display = "[[chemistry]]",
topical_categories = true,
}
labels["chess"] = {
display = "[[chess]]",
topical_categories = true,
}
labels["children's games"] = {
display = "[[children|children's]] [[game]]s",
topical_categories = true,
}
labels["Church of England"] = {
aliases = {"C of E", "CofE"},
Wikipedia = "Church of England",
topical_categories = true,
}
labels["Chinese astronomy"] = {
display = "[[Chinese]] [[astronomy]]",
topical_categories = true,
}
labels["Chinese calligraphy"] = {
display = "[[Chinese]] [[calligraphy]]",
topical_categories = "Calligraphy",
}
labels["Chinese constellation"] = {
display = "[[Chinese]] [[astronomy]]",
topical_categories = "Constellations",
}
labels["Chinese folk religion"] = {
display = "[[Chinese]] [[folk religion]]",
topical_categories = "Religion",
}
labels["Chinese linguistics"] = {
display = "[[Chinese]] [[linguistics]]",
topical_categories = "Linguistics",
}
labels["Chinese mythology"] = {
display = "[[Chinese]] [[mythology]]",
topical_categories = true,
}
labels["Chinese philosophy"] = {
display = "[[Chinese]] [[philosophy]]",
topical_categories = true,
}
labels["Chinese phonetics"] = {
display = "[[Chinese]] [[phonetics]]",
topical_categories = true,
}
labels["Chinese religion"] = {
display = "[[Chinese]] [[religion]]",
topical_categories = "Religion",
}
labels["Chinese star"] = {
display = "[[Chinese]] [[astronomy]]",
topical_categories = "Stars",
}
labels["Christianity"] = {
aliases = {"christianity", "Christian", "christian"},
display = "[[Christianity]]",
topical_categories = true,
}
labels["Church of the East"] = {
display = "[[Church of the East]]",
topical_categories = true,
}
labels["cinematography"] = {
aliases = {"filmology"},
display = "[[cinematography]]",
topical_categories = true,
}
labels["cladistics"] = {
display = "[[cladistics]]",
topical_categories = "Taxonomy",
}
labels["classical mechanics"] = {
display = "[[classical mechanics]]",
topical_categories = true,
}
labels["classical studies"] = {
display = "[[classical studies]]",
topical_categories = true,
}
labels["climatology"] = {
display = "[[climatology]]",
topical_categories = true,
}
labels["climate change"] = {
display = "[[climate change]]",
topical_categories = true,
}
labels["climbing"] = {
aliases = {"rock climbing"},
display = "[[climbing#Noun|climbing]]",
topical_categories = true,
}
labels["clinical psychology"] = {
display = "[[clinical]] [[psychology]]",
topical_categories = true,
}
labels["clothing"] = {
display = "[[clothing#Noun|clothing]]",
topical_categories = true,
}
labels["cloud computing"] = {
display = "[[cloud computing]]",
topical_categories = "Computing",
}
labels["collectible card games"] = {
aliases = {"trading card games", "collectible cards", "trading cards"},
display = "collectible card games",
topical_categories = true,
}
labels["combinatorics"] = {
display = "[[combinatorics]]",
topical_categories = true,
}
labels["comedy"] = {
display = "[[comedy]]",
topical_categories = true,
}
labels["commercial law"] = {
display = "[[commercial#Adjective|commercial]] [[law]]",
topical_categories = true,
}
labels["comics"] = {
display = "[[comics]]",
topical_categories = true,
}
labels["communication"] = {
aliases = {"communications"},
display = "[[communication]]",
topical_categories = true,
}
labels["communism"] = {
aliases = {"Communism"},
display = "[[communism]]",
topical_categories = true,
}
labels["compilation"] = {
aliases = {"compiler"},
display = "[[software]] [[compilation]]",
topical_categories = true,
}
labels["complex analysis"] = {
display = "[[complex analysis]]",
topical_categories = true,
}
labels["computational linguistics"] = {
display = "[[computational linguistics]]",
topical_categories = true,
}
labels["computer chess"] = {
display = "[[computer chess]]",
topical_categories = true,
}
labels["computer games"] = {
aliases = {"computer game", "computer gaming"},
display = "[[computer game]]s",
topical_categories = "Video games",
}
labels["computer graphics"] = {
display = "[[computer graphics]]",
topical_categories = true,
}
labels["computer hardware"] = {
display = "[[computer]] [[hardware]]",
topical_categories = true,
}
labels["computer languages"] = {
aliases = {"computer language", "programming language"},
display = "[[computer language]]s",
topical_categories = true,
}
labels["computer science"] = {
aliases = {"comp sci", "CompSci", "compsci"},
display = "[[computer science]]",
topical_categories = true,
}
labels["computer security"] = {
display = "[[computer security]]",
topical_categories = true,
}
labels["computing"] = {
aliases = {"computer", "computers"},
display = "[[computing#Noun|computing]]",
topical_categories = true,
}
labels["computing theory"] = {
aliases = {"comptheory"},
display = "[[computing#Noun|computing]] [[theory]]",
topical_categories = "Theory of computing",
}
labels["conchology"] = {
display = "[[conchology]]",
topical_categories = true,
}
labels["Confucianism"] = {
display = "[[Confucianism]]",
topical_categories = true,
}
labels["conlanging"] = {
aliases = {"constructed languages", "constructed language"},
display = "[[conlanging]]",
topical_categories = true,
}
labels["conservatism"] = {
display = "[[conservatism]]",
topical_categories = true,
}
labels["construction"] = {
display = "[[construction]]",
topical_categories = true,
}
labels["cooking"] = {
aliases = {"culinary", "cuisine", "cookery", "gastronomy"},
display = "[[cooking#Noun|cooking]]",
topical_categories = true,
}
labels["copyright"] = {
aliases = {"copyright law", "intellectual property", "intellectual property law", "IP law"},
display = "[[copyright]] [[law]]",
topical_categories = true,
}
labels["cosmetics"] = {
aliases = {"cosmetology"},
display = "[[cosmetics]]",
topical_categories = true,
}
labels["cosmology"] = {
display = "[[cosmology]]",
topical_categories = true,
}
labels["creationism"] = {
aliases = {"baraminology"},
display = "[[creationism#English|creationism]]",
topical_categories = true,
}
labels["cribbage"] = {
display = "[[cribbage]]",
topical_categories = true,
}
labels["cricket"] = {
display = "[[cricket]]",
topical_categories = true,
}
labels["crime"] = {
display = "[[crime]]",
topical_categories = true,
}
labels["criminal law"] = {
display = "[[criminal law]]",
topical_categories = true,
}
labels["criminology"] = {
display = "[[criminology]]",
topical_categories = true,
}
labels["croquet"] = {
display = "[[croquet]]",
topical_categories = true,
}
labels["cryptocurrencies"] = {
aliases = {"cryptocurrency"},
display = "[[cryptocurrency|cryptocurrencies]]",
topical_categories = "Cryptocurrency",
}
labels["cryptography"] = {
display = "[[cryptography]]",
topical_categories = true,
}
labels["cryptozoology"] = {
display = "[[cryptozoology]]",
topical_categories = true,
}
labels["crystallography"] = {
display = "[[crystallography]]",
topical_categories = true,
}
labels["cultural anthropology"] = {
display = "[[cultural anthropology]]",
topical_categories = true,
}
labels["curling"] = {
display = "[[curling]]",
topical_categories = true,
}
labels["cybernetics"] = {
display = "[[cybernetics]]",
topical_categories = true,
}
labels["cycle racing"] = {
display = "[[w:cycle sport|cycle racing]]",
topical_categories = true,
}
labels["cycling"] = {
aliases = {"bicycling"},
display = "[[cycling#Noun|cycling]]",
topical_categories = true,
}
labels["cytology"] = {
display = "[[cytology]]",
topical_categories = true,
}
labels["dance"] = {
aliases = {"dancing"},
display = "[[dance#Noun|dance]]",
topical_categories = true,
}
labels["darts"] = {
display = "[[darts]]",
topical_categories = true,
}
labels["data management"] = {
display = "[[data management]]",
topical_categories = true,
}
labels["data modeling"] = {
display = "[[data modeling]]",
topical_categories = true,
}
labels["databases"] = {
aliases = {"database"},
display = "[[database]]s",
topical_categories = true,
}
labels["decision theory"] = {
display = "[[decision theory]]",
topical_categories = true,
}
labels["deltiology"] = {
display = "[[deltiology]]",
topical_categories = true,
}
labels["demography"] = {
display = "[[demography]]",
topical_categories = true,
}
labels["demoscene"] = {
topical_categories = true,
}
labels["dentistry"] = {
display = "[[dentistry]]",
topical_categories = true,
}
labels["dermatology"] = {
display = "[[dermatology]]",
topical_categories = true,
}
labels["design"] = {
display = "[[design#Noun|design]]",
topical_categories = true,
}
labels["dice games"] = {
aliases = {"dice"},
display = "[[dice game]]s",
topical_categories = true,
}
labels["dictation"] = {
display = "[[dictation]]",
topical_categories = true,
}
labels["differential geometry"] = {
display = "[[differential geometry]]",
topical_categories = true,
}
labels["diplomacy"] = {
display = "[[diplomacy]]",
topical_categories = true,
}
labels["disc golf"] = {
display = "[[disc golf]]",
topical_categories = true,
}
labels["divination"] = {
display = "[[divination]]",
topical_categories = true,
}
labels["diving"] = {
display = "[[diving#Noun|diving]]",
topical_categories = true,
}
labels["dominoes"] = {
display = "[[dominoes]]",
topical_categories = true,
}
labels["dou dizhu"] = {
display = "[[w:Dou dizhu|dou dizhu]]",
topical_categories = true,
}
labels["drama"] = {
display = "[[drama]]",
topical_categories = true,
}
labels["dressage"] = {
display = "[[dressage]]",
topical_categories = true,
}
labels["earth science"] = {
display = "[[earth science]]",
topical_categories = "Earth sciences",
}
labels["Eastern Catholicism"] = {
aliases = {"Eastern Catholic"},
display = "[[w:Eastern Catholic Churches|Eastern Catholicism]]",
topical_categories = true,
}
labels["Eastern Orthodoxy"] = {
aliases = {"Eastern Orthodox"},
display = "[[Eastern Orthodoxy]]",
topical_categories = true,
}
labels["eating disorders"] = {
aliases = {"eating disorder"},
display = "[[eating disorder]]s",
topical_categories = true,
}
labels["ecclesiastical"] = {
display = "[[ecclesiastical]]",
topical_categories = "Christianity",
}
labels["ecology"] = {
display = "[[ecology]]",
topical_categories = true,
}
labels["economics"] = {
display = "[[economics]]",
topical_categories = true,
}
labels["education"] = {
display = "[[education]]",
topical_categories = true,
}
labels["Egyptian god"] = {
aliases = {"Egyptian goddess", "Egyptian deity"},
display = "[[Egyptian]] [[mythology]]",
topical_categories = "Egyptian deities",
}
labels["Egyptian mythology"] = {
display = "[[Egyptian]] [[mythology]]",
topical_categories = true,
}
labels["Egyptology"] = {
display = "[[Egyptology]]",
topical_categories = "Ancient Egypt",
}
labels["electrencephalography"] = {
display = "[[electrencephalography]]",
topical_categories = true,
}
labels["electrical engineering"] = {
display = "[[electrical engineering]]",
topical_categories = true,
}
labels["electricity"] = {
display = "[[electricity]]",
topical_categories = true,
}
labels["electrodynamics"] = {
display = "[[electrodynamics]]",
topical_categories = true,
}
labels["electromagnetism"] = {
display = "[[electromagnetism]]",
topical_categories = true,
}
labels["electronics"] = {
display = "[[electronics]]",
topical_categories = true,
}
labels["embryology"] = {
display = "[[embryology]]",
topical_categories = true,
}
labels["emergency services"] = {
display = "[[emergency services]]",
topical_categories = true,
}
labels["emergency medicine"] = {
display = "[[emergency medicine]]",
topical_categories = true,
}
labels["endocrinology"] = {
display = "[[endocrinology]]",
topical_categories = true,
}
labels["engineering"] = {
display = "[[engineering#Noun|engineering]]",
topical_categories = true,
}
labels["enterprise engineering"] = {
display = "[[enterprise engineering]]",
topical_categories = true,
}
labels["entomology"] = {
display = "[[entomology]]",
topical_categories = true,
}
labels["epidemiology"] = {
display = "[[epidemiology]]",
topical_categories = true,
}
labels["epistemology"] = {
display = "[[epistemology]]",
topical_categories = true,
}
labels["equestrianism"] = {
aliases = {"equestrian", "horses", "horsemanship"},
display = "[[equestrianism]]",
topical_categories = true,
}
labels["espionage"] = {
display = "[[espionage]]",
topical_categories = true,
}
labels["ethics"] = {
display = "[[ethics]]",
topical_categories = true,
}
labels["ethnography"] = {
display = "[[ethnography]]",
topical_categories = true,
}
labels["ethology"] = {
display = "[[ethology]]",
topical_categories = true,
}
labels["European folklore"] = {
display = "[[European]] [[folklore]]",
topical_categories = true,
}
labels["European Union"] = {
aliases = {"EU"},
display = "[[European Union]]",
topical_categories = true,
}
labels["evolutionary theory"] = {
aliases = {"evolutionary biology"},
display = "[[evolutionary theory]]",
topical_categories = true,
}
labels["exercise"] = {
display = "[[exercise]]",
topical_categories = true,
}
labels["eye color"] = {
display = "[[eye]] [[color]]",
topical_categories = "Eye colors",
}
labels["falconry"] = {
display = "[[falconry]]",
topical_categories = true,
}
labels["fantasy"] = {
display = "[[fantasy]]",
topical_categories = true,
}
labels["farriery"] = {
display = "[[farriery]]",
topical_categories = true,
}
labels["fascism"] = {
display = "[[fascism]]",
topical_categories = true,
}
labels["fashion"] = {
display = "[[fashion]]",
topical_categories = true,
}
labels["feminism"] = {
display = "[[feminism]]",
topical_categories = true,
}
labels["fencing"] = {
display = "[[fencing#Noun|fencing]]",
topical_categories = true,
}
labels["feudalism"] = {
display = "[[feudalism|feudalism]]",
topical_categories = true,
}
labels["fiction"] = {
aliases = {"fictional"},
display = "[[fiction]]",
topical_categories = true,
}
labels["field hockey"] = {
display = "[[field hockey]]",
topical_categories = true,
}
labels["figure skating"] = {
display = "[[figure skating]]",
topical_categories = true,
}
labels["file format"] = {
display = "[[file format]]",
topical_categories = "File formats",
}
labels["film"] = {
display = "[[film#Noun|film]]",
topical_categories = true,
}
labels["film genre"] = {
aliases = {"cinema"},
display = "[[film#Noun|film]]",
topical_categories = "Film genres",
}
labels["finance"] = {
display = "[[finance#Noun|finance]]",
topical_categories = true,
}
labels["Finnic mythology"] = {
aliases = {"Finnish mythology"},
display = "[[Finnic]] [[mythology]]",
topical_categories = true,
}
labels["firearms"] = {
aliases = {"firearm"},
display = "[[firearm]]s",
topical_categories = true,
}
labels["firefighting"] = {
display = "[[firefighting]]",
topical_categories = true,
}
labels["fishing"] = {
aliases = {"angling"},
display = "[[fishing#Noun|fishing]]",
topical_categories = true,
}
labels["flamenco"] = {
display = "[[flamenco]]",
topical_categories = true,
}
labels["fluid dynamics"] = {
display = "[[fluid dynamics]]",
topical_categories = true,
}
labels["fluid mechanics"] = {
display = "[[fluid mechanics]]",
topical_categories = "Mechanics",
}
labels["folklore"] = {
display = "[[folklore]]",
topical_categories = true,
}
labels["forestry"] = {
display = "[[forestry]]",
topical_categories = true,
}
labels["Forteana"] = {
display = "[[Forteana]]",
topical_categories = true,
}
labels["Freemasonry"] = {
aliases = {"freemasonry"},
display = "[[Freemasonry]]",
topical_categories = true,
}
labels["functional analysis"] = {
display = "[[functional analysis]]",
topical_categories = true,
}
labels["furniture"] = {
display = "[[furniture]]",
topical_categories = true,
}
labels["furry fandom"] = {
display = "[[furry#Noun|furry]] [[fandom]]",
topical_categories = true,
}
labels["fuzzy logic"] = {
display = "[[fuzzy logic]]",
topical_categories = true,
}
labels["Gaelic football"] = {
display = "[[Gaelic football]]",
topical_categories = true,
}
labels["gambling"] = {
display = "[[gambling#Noun|gambling]]",
topical_categories = true,
}
labels["game theory"] = {
display = "[[game theory]]",
topical_categories = true,
}
labels["games"] = {
aliases = {"game"},
display = "[[game#Noun|games]]",
topical_categories = true,
}
labels["gaming"] = {
display = "[[gaming#Noun|gaming]]",
topical_categories = true,
}
labels["genealogy"] = {
display = "[[genealogy]]",
topical_categories = true,
}
labels["general semantics"] = {
display = "[[general semantics]]",
topical_categories = true,
}
labels["genetics"] = {
display = "[[genetics]]",
topical_categories = true,
}
labels["geography"] = {
display = "[[geography]]",
topical_categories = true,
}
labels["geology"] = {
display = "[[geology]]",
topical_categories = true,
}
labels["geological period"] = {
Wikipedia = "Geological period",
topical_categories = "Geological periods",
}
labels["geometry"] = {
display = "[[geometry]]",
topical_categories = true,
}
labels["geomorphology"] = {
display = "[[geomorphology]]",
topical_categories = true,
}
labels["geopolitics"] = {
display = "[[geopolitics]]",
topical_categories = true,
}
labels["gerontology"] = {
display = "[[gerontology]]",
topical_categories = true,
}
labels["glassblowing"] = {
display = "[[glassblowing]]",
topical_categories = true,
}
labels["Gnosticism"] = {
aliases = {"gnosticism"},
display = "[[Gnosticism]]",
topical_categories = true,
}
labels["go"] = {
aliases = {"Go", "game of go", "game of Go"},
display = "{{l|en|go|id=game}}",
topical_categories = true,
}
labels["golf"] = {
display = "[[golf]]",
topical_categories = true,
}
labels["government"] = {
display = "[[government]]",
topical_categories = true,
}
labels["grammar"] = {
display = "[[grammar]]",
topical_categories = true,
}
labels["grammatical case"] = {
display = "[[grammar]]",
topical_categories = "Grammatical cases",
}
labels["grammatical mood"] = {
display = "[[grammar]]",
topical_categories = "Grammatical moods",
}
labels["graph theory"] = {
display = "[[graph theory]]",
topical_categories = true,
}
labels["graphic design"] = {
display = "[[graphic design]]",
topical_categories = true,
}
labels["graphical user interface"] = {
aliases = {"GUI"},
display = "[[graphical user interface]]",
topical_categories = true,
}
labels["Greek mythology"] = {
display = "[[Greek]] [[mythology]]",
topical_categories = true,
}
labels["group theory"] = {
display = "[[group theory]]",
topical_categories = true,
}
labels["gun mechanisms"] = {
aliases = {"firearm mechanism", "firearm mechanisms", "gun mechanism"},
display = "[[firearm]]s",
topical_categories = true,
}
labels["gun sports"] = {
aliases = {"shooting sports"},
display = "[[gun]] [[sport]]s",
topical_categories = true,
}
labels["gymnastics"] = {
display = "[[gymnastics]]",
topical_categories = true,
}
labels["gynaecology"] = {
aliases = {"gynecology"},
display = "[[gynaecology]]",
topical_categories = true,
}
labels["hair color"] = {
display = "[[hair]] [[color]]",
topical_categories = "Hair colors",
}
labels["hairdressing"] = {
display = "[[hairdressing]]",
topical_categories = true,
}
labels["handball"] = {
display = "[[handball]]",
topical_categories = true,
}
labels["Hawaiian mythology"] = {
display = "[[Hawaiian]] [[mythology]]",
topical_categories = true,
}
labels["headwear"] = {
display = "[[clothing#Noun|clothing]]",
topical_categories = true,
}
labels["healthcare"] = {
display = "[[healthcare]]",
topical_categories = true,
}
labels["helminthology"] = {
display = "[[helminthology]]",
topical_categories = true,
}
labels["hematology"] = {
aliases = {"haematology"},
display = "[[hematology]]",
topical_categories = true,
}
labels["heraldry"] = {
display = "[[heraldry]]",
topical_categories = true,
}
labels["herbalism"] = {
display = "[[herbalism]]",
topical_categories = true,
}
labels["herpetology"] = {
display = "[[herpetology]]",
topical_categories = true,
}
labels["Hinduism"] = {
display = "[[Hinduism]]",
topical_categories = true,
}
labels["Hindutva"] = {
display = "[[Hindutva]]",
topical_categories = true,
}
labels["historiography"] = {
display = "[[historiography]]",
topical_categories = true,
}
labels["history"] = {
display = "[[history]]",
topical_categories = true,
}
labels["historical linguistics"] = {
display = "[[historical linguistics]]",
topical_categories = "Linguistics",
}
labels["hockey"] = {
display = "[[field hockey]] or [[ice hockey]]",
topical_categories = {"Field hockey", "Ice hockey"},
}
labels["homeopathy"] = {
display = "[[homeopathy]]",
topical_categories = true,
}
labels["horse color"] = {
display = "[[horse]] [[color]]",
topical_categories = "Horse colors",
}
labels["horse racing"] = {
display = "[[horse racing]]",
topical_categories = true,
}
labels["horticulture"] = {
aliases = {"gardening"},
display = "[[horticulture]]",
topical_categories = true,
}
labels["HTML"] = {
display = "[[Hypertext Markup Language|HTML]]",
topical_categories = true,
}
labels["human resources"] = {
display = "[[human resources]]",
topical_categories = true,
}
labels["humanities"] = {
display = "[[humanities]]",
topical_categories = true,
}
labels["hunting"] = {
display = "[[hunting#Noun|hunting]]",
topical_categories = true,
}
labels["hurling"] = {
display = "[[hurling#Noun|hurling]]",
topical_categories = true,
}
labels["hydroacoustics"] = {
Wikipedia = "Hydroacoustics",
topical_categories = true,
}
labels["hydrology"] = {
display = "[[hydrology]]",
topical_categories = true,
}
labels["ice hockey"] = {
display = "[[ice hockey]]",
topical_categories = true,
}
labels["ichthyology"] = {
display = "[[ichthyology]]",
topical_categories = true,
}
labels["idol fandom"] = {
display = "[[idol]] [[fandom]]",
topical_categories = true,
}
labels["immunochemistry"] = {
display = "[[immunochemistry]]",
topical_categories = true,
}
labels["immunology"] = {
display = "[[immunology]]",
topical_categories = true,
}
labels["import/export"] = {
display = "[[import#Noun|import]]/[[export#Noun|export]]",
topical_categories = true,
}
labels["Indo-European studies"] = {
aliases = {"indo-european studies"},
display = "[[Indo-European studies]]",
topical_categories = true,
}
labels["information science"] = {
display = "[[information science]]",
topical_categories = true,
}
labels["information theory"] = {
display = "[[information theory]]",
topical_categories = true,
}
labels["information technology"] = {
aliases = {"IT"},
display = "[[information technology]]",
topical_categories = "Computing",
}
labels["inheritance law"] = {
display = "[[inheritance law]]",
topical_categories = true,
}
labels["inorganic chemistry"] = {
display = "[[inorganic chemistry]]",
topical_categories = true,
}
labels["insurance"] = {
display = "[[insurance]]",
topical_categories = true,
}
labels["international law"] = {
display = "[[international law]]",
topical_categories = true,
}
labels["international relations"] = {
display = "[[international relations]]",
topical_categories = true,
}
labels["international standards"] = {
aliases = {"international standard", "ISO", "International Organization for Standardization", "International Organisation for Standardisation"},
Wikipedia = "International standard",
}
labels["Internet"] = {
aliases = {"internet", "online"},
display = "[[Internet]]",
topical_categories = true,
}
labels["Iranian mythology"] = {
display = "[[Iranian]] [[mythology]]",
topical_categories = true,
}
labels["Irish mythology"] = {
display = "[[Irish]] [[mythology]]",
topical_categories = true,
}
labels["Islam"] = {
aliases = {"islam", "Islamic", "Muslim"},
Wikipedia = "Islam",
topical_categories = true,
}
labels["Islamic finance"] = {
aliases = {"Islamic banking", "Muslim finance", "Muslim banking", "Sharia-compliant finance"},
Wikipedia = "Islamic finance",
topical_categories = true,
}
labels["Islamic law"] = {
aliases = {"Islamic legal", "Sharia"},
Wikipedia = "Sharia",
topical_categories = true,
}
labels["Jainism"] = {
display = "[[Jainism]]",
topical_categories = true,
}
labels["Japanese god"] = {
display = "[[Japanese]] [[mythology]]",
topical_categories = "Japanese deities",
}
labels["Japanese mythology"] = {
display = "[[Japanese]] [[mythology]]",
topical_categories = true,
}
labels["Java programming language"] = {
aliases = {"JavaPL", "Java PL"},
display = "[[w:Java (programming language)|Java programming language]]",
topical_categories = true,
}
labels["jazz"] = {
display = "[[jazz#Noun|jazz]]",
topical_categories = true,
}
labels["jewelry"] = {
aliases = {"jewellery"},
display = "[[jewelry]]",
topical_categories = true,
}
labels["Jewish law"] = {
aliases = {"Halacha", "Halachah", "Halakha", "Halakhah", "halacha", "halachah", "halakha", "halakhah", "Jewish Law", "jewish law"},
display = "[[Jewish]] [[law]]",
topical_categories = true,
}
labels["Germanic paganism"] = {
aliases = {"Asatru", "Ásatrú", "Germanic neopaganism", "Germanic Paganism", "Heathenry", "heathenry", "Norse neopaganism", "Norse paganism"},
display = "[[Germanic#Adjective|Germanic]] [[paganism]]",
topical_categories = true,
}
labels["journalism"] = {
display = "[[journalism]]",
topical_categories = "Mass media",
}
labels["Judaism"] = {
display = "[[Judaism]]",
topical_categories = true,
}
labels["judo"] = {
display = "[[judo]]",
topical_categories = true,
}
labels["juggling"] = {
display = "[[juggling#Noun|juggling]]",
topical_categories = true,
}
labels["karuta"] = {
display = "[[karuta]]",
topical_categories = true,
}
labels["kendo"] = {
display = "[[kendo]]",
topical_categories = true,
}
labels["knitting"] = {
display = "[[knitting#Noun|knitting]]",
topical_categories = true,
}
labels["labour"] = {
aliases = {"labor", "labour movement", "labor movement"},
display = "[[labour]]",
topical_categories = true,
}
labels["lacrosse"] = {
display = "[[lacrosse]]",
topical_categories = true,
}
labels["law"] = {
aliases = {"legal"},
display = "[[law#English|law]]",
topical_categories = true,
}
labels["law enforcement"] = {
aliases = {"police", "policing"},
display = "[[law enforcement]]",
topical_categories = true,
}
labels["leftism"] = {
display = "[[leftism]]",
topical_categories = true,
}
labels["letterpress"] = {
aliases = {"metal type", "metal typesetting"},
display = "[[letterpress]] [[typography]]",
topical_categories = "Typography",
}
labels["lexicography"] = {
display = "[[lexicography]]",
topical_categories = true,
}
labels["LGBTQ"] = {
aliases = {"LGBT", "LGBT+", "LGBT*", "LGBTQ+", "LGBTQ*", "LGBTQIA", "LGBTQIA+", "LGBTQIA*"},
display = "[[LGBTQ]]",
topical_categories = true,
}
labels["liberalism"] = {
display = "[[liberalism]]",
topical_categories = true,
}
labels["library science"] = {
display = "[[library science]]",
topical_categories = true,
}
labels["lichenology"] = {
display = "[[lichenology]]",
topical_categories = true,
}
labels["limnology"] = {
display = "[[limnology]]",
topical_categories = "Ecology",
}
labels["lipid"] = {
display = "[[biochemistry]]",
topical_categories = "Lipids",
}
labels["linear algebra"] = {
aliases = {"vector algebra"},
display = "[[linear algebra]]",
topical_categories = true,
}
labels["linguistic morphology"] = {
display = "[[linguistic]] [[morphology]]",
topical_categories = true,
}
labels["linguistics"] = {
aliases = {"philology"},
display = "[[linguistics]]",
topical_categories = true,
}
labels["literature"] = {
display = "[[literature]]",
topical_categories = true,
}
labels["logic"] = {
display = "[[logic]]",
topical_categories = true,
}
labels["logistics"] = {
display = "[[logistics]]",
topical_categories = true,
}
labels["luge"] = {
display = "[[luge]]",
topical_categories = true,
}
labels["machining"] = {
display = "[[machining#Noun|machining]]",
topical_categories = true,
}
labels["machine learning"] = {
aliases = {"ML"},
display = "[[machine learning]]",
topical_categories = true,
}
labels["macroeconomics"] = {
display = "[[macroeconomics]]",
topical_categories = "Economics",
}
labels["mahjong"] = {
display = "[[mahjong]]",
topical_categories = true,
}
labels["malacology"] = {
display = "[[malacology]]",
topical_categories = true,
}
labels["mammalogy"] = {
display = "[[mammalogy]]",
topical_categories = true,
}
labels["management"] = {
display = "[[management]]",
topical_categories = true,
}
labels["manga"] = {
display = "[[manga]]",
topical_categories = "Japanese fiction",
}
labels["manhua"] = {
display = "[[manhua]]",
topical_categories = "Chinese fiction",
}
labels["manhwa"] = {
display = "[[manhwa]]",
topical_categories = "Korean fiction",
}
labels["Manichaeism"] = {
display = "[[Manichaeism]]",
topical_categories = true,
}
labels["manufacturing"] = {
display = "[[manufacturing#Noun|manufacturing]]",
topical_categories = true,
}
labels["Maoism"] = {
display = "[[Maoism]]",
topical_categories = true,
}
labels["marching"] = {
display = "[[marching#Noun|marching]]",
topical_categories = true,
}
labels["marine biology"] = {
aliases = {"coral science"},
display = "[[marine biology]]",
topical_categories = true,
}
labels["marketing"] = {
display = "[[marketing#Noun|marketing]]",
topical_categories = true,
}
labels["martial arts"] = {
display = "[[martial arts]]",
topical_categories = true,
}
labels["Marxism"] = {
display = "[[Marxism]]",
topical_categories = true,
}
labels["masonry"] = {
display = "[[masonry]]",
topical_categories = true,
}
labels["massage"] = {
display = "[[massage]]",
topical_categories = true,
}
labels["materials science"] = {
display = "[[materials science]]",
topical_categories = true,
}
labels["mathematical analysis"] = {
aliases = {"analysis"},
display = "[[mathematical analysis]]",
topical_categories = true,
}
labels["mathematics"] = {
aliases = {"math", "maths"},
display = "[[mathematics]]",
topical_categories = true,
}
labels["measure theory"] = {
display = "[[measure theory]]",
topical_categories = true,
}
labels["mechanical engineering"] = {
display = "[[mechanical engineering]]",
topical_categories = true,
}
labels["mechanics"] = {
display = "[[mechanics]]",
topical_categories = true,
}
labels["media"] = {
display = "[[media]]",
topical_categories = true,
}
labels["mediaeval folklore"] = {
aliases = {"medieval folklore"},
display = "[[mediaeval]] [[folklore]]",
topical_categories = "European folklore",
}
labels["medical genetics"] = {
display = "[[medical]] [[genetics]]",
topical_categories = true,
}
labels["medical sign"] = {
display = "[[medicine]]",
topical_categories = "Medical signs and symptoms",
}
labels["medicine"] = {
aliases = {"medical"},
display = "[[medicine]]",
topical_categories = true,
}
labels["Meitei god"] = {
display = "[[Meitei]] [[mythology]]",
topical_categories = "Meitei deities",
}
labels["mental health"] = {
display = "[[mental health]]",
topical_categories = true,
}
labels["Mesopotamian mythology"] = {
display = "[[Mesopotamian]] [[mythology]]",
topical_categories = true,
}
labels["metadata"] = {
display = "[[metadata]]",
topical_categories = "Data management",
}
labels["metallurgy"] = {
display = "[[metallurgy]]",
topical_categories = true,
}
labels["metalworking"] = {
display = "[[metalworking]]",
topical_categories = true,
}
labels["metaphysics"] = {
display = "[[metaphysics]]",
topical_categories = true,
}
labels["meteorology"] = {
display = "[[meteorology]]",
topical_categories = true,
}
labels["Methodism"] = {
aliases = {"Methodist", "methodism", "methodist"},
display = "[[Methodism]]",
topical_categories = true,
}
labels["metrology"] = {
display = "[[metrology]]",
topical_categories = true,
}
labels["microbiology"] = {
display = "[[microbiology]]",
topical_categories = true,
}
labels["microelectronics"] = {
display = "[[microelectronics]]",
topical_categories = true,
}
labels["micronationalism"] = {
display = "[[micronationalism]]",
topical_categories = true,
}
labels["microscopy"] = {
display = "[[microscopy]]",
topical_categories = true,
}
labels["military"] = {
display = "[[military]]",
topical_categories = true,
}
labels["mineralogy"] = {
display = "[[mineralogy]]",
topical_categories = true,
}
labels["mining"] = {
display = "[[mining#Noun|mining]]",
topical_categories = true,
}
labels["molecular biology"] = {
display = "[[molecular biology]]",
topical_categories = true,
}
labels["monarchy"] = {
display = "[[monarchy]]",
topical_categories = true,
}
labels["money"] = {
display = "[[money]]",
topical_categories = true,
}
labels["Mormonism"] = {
display = "[[Mormonism]]",
topical_categories = true,
}
labels["motorcycling"] = {
aliases = {"motorcycle", "motorcycles", "motorbike"},
display = "[[motorcycling#Noun|motorcycling]]",
topical_categories = "Motorcycles",
}
-- There are other types of racing, but 99% of the time "racing" on its own refers to motorsports
labels["motor racing"] = {
aliases = {"motor sport", "motorsport", "motorsports", "racing"},
display = "[[motor racing]]",
topical_categories = true,
}
labels["multiplicity"] = {
display = "{{l|en|multiplicity|id=multiple personalities}}",
topical_categories = "Multiplicity (psychology)",
}
labels["music"] = {
display = "[[music]]",
topical_categories = true,
}
labels["music industry"] = {
Wikipedia = "Music industry",
topical_categories = true,
}
labels["mycology"] = {
display = "[[mycology]]",
topical_categories = true,
}
labels["mythology"] = {
display = "[[mythology]]",
topical_categories = true,
}
labels["nanotechnology"] = {
display = "[[nanotechnology]]",
topical_categories = true,
}
labels["narratology"] = {
display = "[[narratology]]",
topical_categories = true,
}
labels["nautical"] = {
display = "[[nautical]]",
topical_categories = true,
}
labels["navigation"] = {
display = "[[navigation]]",
topical_categories = true,
}
labels["Nazism"] = { -- see also Neo-Nazism
aliases = {"nazism", "Nazi", "nazi", "Nazis", "nazis"},
Wikipedia = "Nazism",
topical_categories = true,
}
labels["nematology"] = {
display = "[[nematology]]",
topical_categories = "Zoology",
}
labels["neo-Nazism"] = { -- but also this is often used to indicate Nazi-used jargon; cf "white supremacist ideology"
aliases = {"Neo-Nazism", "Neo-nazism", "neo-nazism", "Neo-Nazi", "Neo-nazi", "neo-Nazi", "neo-nazi", "Neo-Nazis", "Neo-nazis", "neo-Nazis", "neo-nazis", "NeoNazism", "Neonazism", "neoNazism", "neonazism", "NeoNazi", "Neonazi", "neoNazi", "neonazi", "NeoNazis", "Neonazis", "neoNazis", "neonazis"},
Wikipedia = "Neo-Nazism",
topical_categories = true,
}
labels["netball"] = {
display = "[[netball]]",
topical_categories = true,
}
labels["networking"] = {
display = "[[networking#Noun|networking]]",
topical_categories = true,
}
labels["neuroanatomy"] = {
display = "[[neuroanatomy]]",
topical_categories = true,
}
labels["neurology"] = {
display = "[[neurology]]",
topical_categories = true,
}
labels["neuroscience"] = {
display = "[[neuroscience]]",
topical_categories = true,
}
labels["neurosurgery"] = {
display = "[[neurosurgery]]",
topical_categories = true,
}
labels["newspapers"] = {
display = "[[newspaper]]s",
topical_categories = true,
}
labels["Norse god"] = {
aliases = {"Norse goddess", "Norse deity"},
display = "[[Norse]] [[mythology]]",
topical_categories = "Norse deities",
}
labels["Norse mythology"] = {
display = "[[Norse]] [[mythology]]",
topical_categories = true,
}
labels["nuclear physics"] = {
display = "[[nuclear physics]]",
topical_categories = true,
}
labels["number theory"] = {
display = "[[number theory]]",
topical_categories = true,
}
labels["numismatics"] = {
display = "[[numismatics]]",
topical_categories = "Currency",
}
labels["nutrition"] = {
display = "[[nutrition]]",
topical_categories = true,
}
labels["object-oriented programming"] = {
aliases = {"object-oriented", "OOP"},
display = "[[object-oriented programming]]",
topical_categories = true,
}
labels["obstetrics"] = {
aliases = {"obstetric"},
display = "[[obstetrics]]",
topical_categories = true,
}
labels["occult"] = {
display = "[[occult]]",
topical_categories = true,
}
labels["oceanography"] = {
display = "[[oceanography]]",
topical_categories = true,
}
labels["oenology"] = {
display = "[[oenology]]",
topical_categories = true,
}
labels["oil industry"] = {
aliases = {"oil drilling"},
display = "[[w:Petroleum industry|oil industry]]",
topical_categories = true,
}
labels["oncology"] = {
display = "[[oncology]]",
topical_categories = true,
}
labels["online gaming"] = {
aliases = {"online games", "MMO", "MMORPG"},
display = "[[online]] [[gaming#Noun|gaming]]",
topical_categories = "Video games",
}
labels["opera"] = {
display = "[[opera]]",
topical_categories = true,
}
labels["operating systems"] = {
display = "[[operating system]]s",
topical_categories = "Software",
}
labels["ophthalmology"] = {
display = "[[ophthalmology]]",
topical_categories = true,
}
labels["optics"] = {
display = "[[optics]]",
topical_categories = true,
}
labels["organic chemistry"] = {
display = "[[organic chemistry]]",
topical_categories = true,
}
labels["ornithology"] = {
display = "[[ornithology]]",
topical_categories = true,
}
labels["orthodontics"] = {
display = "[[orthodontics]]",
topical_categories = "Dentistry",
}
labels["orthography"] = {
display = "[[orthography]]",
topical_categories = true,
}
labels["paganism"] = {
aliases = {"pagan", "neopagan", "neopaganism", "neo-pagan", "neo-paganism"},
display = "[[paganism]]",
topical_categories = true,
}
labels["pain"] = {
display = "[[medicine]]",
topical_categories = true,
}
labels["paintball"] = {
display = "[[paintball]]",
topical_categories = true,
}
labels["painting"] = {
display = "[[painting#Noun|painting]]",
topical_categories = true,
}
labels["palaeography"] = {
aliases = {"paleography"},
display = "[[palaeography]]",
topical_categories = true,
}
labels["paleontology"] = {
aliases = {"palaeontology"},
display = "[[paleontology]]",
topical_categories = true,
}
labels["palmistry"] = {
display = "[[palmistry]]",
topical_categories = true,
}
labels["palynology"] = {
display = "[[palynology]]",
topical_categories = true,
}
labels["parapsychology"] = {
display = "[[parapsychology]]",
topical_categories = true,
}
labels["parasitology"] = {
display = "[[parasitology]]",
topical_categories = true,
}
labels["particle physics"] = {
display = "[[particle physics]]",
topical_categories = true,
}
labels["pasteurisation"] = {
display = "[[pasteurisation]]",
topical_categories = true,
}
labels["patent law"] = {
aliases = {"patents"},
display = "[[patent#Noun|patent]] [[law]]",
topical_categories = true,
}
labels["pathology"] = {
display = "[[pathology]]",
topical_categories = true,
}
labels["pensions"] = {
display = "[[pension]]s",
topical_categories = true,
}
labels["pesäpallo"] = {
aliases = {"pesapallo"},
display = "[[pesäpallo]]",
topical_categories = true,
}
labels["petrochemistry"] = {
display = "[[petrochemistry]]",
topical_categories = true,
}
labels["petrology"] = {
display = "[[petrology]]",
topical_categories = true,
}
labels["pharmacology"] = {
display = "[[pharmacology]]",
topical_categories = true,
}
labels["pharmacy"] = {
display = "[[pharmacy]]",
topical_categories = true,
}
labels["pharyngology"] = {
display = "[[pharyngology]]",
topical_categories = true,
}
labels["philately"] = {
display = "[[philately]]",
topical_categories = true,
}
labels["philosophy"] = {
display = "[[philosophy]]",
topical_categories = true,
}
labels["phonetics"] = {
display = "[[phonetics]]",
topical_categories = true,
}
labels["phonology"] = {
display = "[[phonology]]",
topical_categories = true,
}
labels["photography"] = {
display = "[[photography]]",
topical_categories = true,
}
labels["phrenology"] = {
display = "[[phrenology]]",
topical_categories = true,
}
labels["physical chemistry"] = {
display = "[[physical chemistry]]",
topical_categories = true,
}
labels["physics"] = {
display = "[[physics]]",
topical_categories = true,
}
labels["physiology"] = {
display = "[[physiology]]",
topical_categories = true,
}
labels["phytopathology"] = {
display = "[[phytopathology]]",
topical_categories = true,
}
labels["pinball"] = {
display = "[[pinball]]",
topical_categories = true,
}
labels["planetology"] = {
display = "[[planetology]]",
topical_categories = true,
}
labels["playground games"] = {
aliases = {"playground game"},
display = "[[playground]] [[game]]s",
topical_categories = true,
}
labels["poetry"] = {
display = "[[poetry]]",
topical_categories = true,
}
labels["Pokémon"] = {
display = "''[[w:Pokémon|Pokémon]]''",
topical_categories = true,
}
labels["poker"] = {
display = "[[poker]]",
topical_categories = true,
}
labels["poker slang"] = {
display = "[[poker]] [[slang]]",
topical_categories = "Poker",
}
labels["political science"] = {
display = "[[political science]]",
topical_categories = true,
}
labels["politics"] = {
aliases = {"political"},
display = "[[politics]]",
topical_categories = true,
}
labels["Australian politics"] = {
display = "[[w:Politics of Australia|Australian politics]]",
topical_categories = true,
}
labels["Canadian politics"] = {
display = "[[w:Politics of Canada|Canadian politics]]",
topical_categories = true,
}
labels["European politics"] = {
display = "[[w:Politics of Europe|European politics]]",
topical_categories = true,
}
labels["EU politics"] = {
display = "[[w:Politics of the European Union|EU politics]]",
topical_categories = true,
}
labels["French politics"] = {
display = "[[w:Politics of France|French politics]]",
topical_categories = true,
}
labels["German politics"] = {
display = "[[w:Politics of Germany|German politics]]",
topical_categories = true,
}
labels["Hong Kong politics"] = {
aliases = {"HK politics"},
display = "[[w:Politics of Hong Kong|HK politics]]",
topical_categories = true,
}
labels["Indian politics"] = {
display = "[[w:Politics of India|Indian politics]]",
topical_categories = true,
}
labels["Indonesian politics"] = {
aliases = {"Indonesia politics"},
display = "[[w:Politics of Indonesia|Indonesian politics]]",
topical_categories = true,
}
labels["Irish politics"] = {
display = "[[w:Politics of the Republic of Ireland|Irish politics]]",
topical_categories = true,
}
labels["Malaysian politics"] = {
aliases = {"Malaysia politics"},
display = "[[w:Politics of Malaysia|Malaysian politics]]",
topical_categories = true,
}
labels["New Zealand politics"] = {
display = "[[w:Politics of New Zealand|New Zealand politics]]",
topical_categories = true,
}
labels["Pakistani politics"] = {
display = "[[w:Politics of Pakistan|Pakistani politics]]",
topical_categories = true,
}
labels["Palestinian politics"] = {
aliases = {"Palestine politics"},
display = "[[w:Politics of the Palestinian National Authority|Palestinian politics]]",
topical_categories = true,
}
labels["Philippine politics"] = {
aliases = {"Filipino politics"},
display = "[[w:Politics of the Philippines|Philippine politics]]",
topical_categories = true,
}
labels["Philmont Scout Ranch"] = {
aliases = {"Philmont"},
display = "[[w:Philmont Scout Ranch|Philmont Scout Ranch]]",
topical_categories = true,
}
labels["Spanish politics"] = {
display = "[[w:Politics of Spain|Spanish politics]]",
topical_categories = true,
}
labels["Swiss politics"] = {
display = "[[w:Politics of Switzerland|Swiss politics]]",
topical_categories = true,
}
labels["UK politics"] = {
display = "[[w:Politics of the United Kingdom|UK politics]]",
topical_categories = true,
}
labels["UN"] = {
display = "[[United Nations|UN]]",
topical_categories = "United Nations",
}
labels["US politics"] = {
display = "[[w:Politics of the United States|US politics]]",
topical_categories = true,
}
labels["pornography"] = {
aliases = {"porn", "porno"},
display = "[[pornography]]",
topical_categories = true,
}
labels["Portuguese folklore"] = {
display = "[[Portuguese#Adjective|Portuguese]] [[folklore]]",
topical_categories = "European folklore",
}
labels["post"] = {
display = "[[post#Etymology 2|post]]",
topical_categories = true,
}
labels["potential theory"] = {
display = "[[potential theory]]",
topical_categories = true,
}
labels["pottery"] = {
display = "[[pottery]]",
topical_categories = "Ceramics",
}
labels["pragmatics"] = {
display = "[[pragmatics]]",
topical_categories = true,
}
labels["printing"] = {
display = "[[printing#Noun|printing]]",
topical_categories = true,
}
labels["probability theory"] = {
display = "[[probability theory]]",
topical_categories = true,
}
labels["professional wrestling"] = {
aliases = {"pro wrestling"},
display = "[[professional wrestling]]",
topical_categories = true,
}
labels["programming"] = {
aliases = {"computer programming"},
display = "[[programming#Noun|programming]]",
topical_categories = true,
}
labels["property law"] = {
aliases = {"land law", "real estate law"},
display = "[[property law]]",
topical_categories = true,
}
labels["prosody"] = {
display = "[[prosody]]",
topical_categories = true,
}
labels["Protestantism"] = {
aliases = {"protestantism", "Protestant", "protestant"},
display = "[[Protestantism]]",
topical_categories = true,
}
labels["pseudoscience"] = {
display = "[[pseudoscience]]",
topical_categories = true,
}
labels["psychiatry"] = {
display = "[[psychiatry]]",
topical_categories = true,
}
labels["psychoanalysis"] = {
display = "[[psychoanalysis]]",
topical_categories = true,
}
labels["psychology"] = {
display = "[[psychology]]",
topical_categories = true,
}
labels["psychotherapy"] = {
display = "[[psychotherapy]]",
topical_categories = true,
}
labels["publishing"] = {
display = "[[publishing#Noun|publishing]]",
topical_categories = true,
}
labels["pulmonology"] = {
display = "[[pulmonology]]",
topical_categories = true,
}
labels["pyrotechnics"] = {
display = "[[pyrotechnics]]",
topical_categories = true,
}
labels["QAnon"] = {
aliases = {"Qanon"},
Wikipedia = "QAnon",
topical_categories = true,
}
labels["Quakerism"] = {
display = "[[Quakerism]]",
topical_categories = true,
}
labels["quantum computing"] = {
display = "[[quantum computing]]",
topical_categories = true,
}
labels["quantum mechanics"] = {
aliases = {"quantum physics"},
display = "[[quantum mechanics]]",
topical_categories = true,
}
-- TODO: What kind of topic is "radiation"? Is it specific kinds of radiation? That would be a set-type category.
labels["radiation"] = {
display = "[[physics]]",
topical_categories = true,
}
labels["radio"] = {
display = "[[radio]]",
topical_categories = true,
}
labels["Raëlism"] = {
display = "[[Raëlism]]",
topical_categories = true,
}
labels["rail transport"] = {
aliases = {"rail", "railroading", "railroads"},
display = "[[rail transport]]",
topical_categories = "Rail transportation",
}
labels["Rastafari"] = {
aliases = {"Rasta", "rasta", "Rastafarian", "rastafarian", "Rastafarianism"},
display = "[[Rastafari]]",
topical_categories = true,
}
labels["real estate"] = {
display = "[[real estate]]",
topical_categories = true,
}
labels["real tennis"] = {
display = "[[real tennis]]",
topical_categories = "Tennis",
}
labels["recreational mathematics"] = {
display = "[[recreational mathematics]]",
topical_categories = "Mathematics",
}
labels["Reddit"] = {
display = "[[Reddit]]",
topical_categories = true,
}
labels["regular expressions"] = {
aliases = {"regex"},
display = "[[regular expression]]s",
topical_categories = true,
}
labels["relativity"] = {
display = "[[relativity]]",
topical_categories = true,
}
labels["religion"] = {
display = "[[religion]]",
topical_categories = true,
}
labels["rhetoric"] = {
display = "[[rhetoric]]",
topical_categories = true,
}
labels["road transport"] = {
aliases = {"roads"},
display = "[[w:road transport|road transport]]",
topical_categories = true,
}
labels["robotics"] = {
display = "[[robotics]]",
topical_categories = true,
}
labels["rock paper scissors"] = {
topical_categories = true,
}
labels["roleplaying games"] = {
aliases = {"role playing games", "role-playing games", "RPG", "RPGs"},
display = "[[roleplaying game]]s",
topical_categories = "Role-playing games",
}
labels["roller derby"] = {
display = "[[roller derby]]",
topical_categories = true,
}
labels["Roman Catholicism"] = {
aliases = {"Roman Catholic", "Roman Catholic Church"},
display = "[[Roman Catholicism]]",
topical_categories = true,
}
labels["Roman Empire"] = {
display = "[[Roman Empire]]",
topical_categories = true,
}
labels["Roman mythology"] = {
display = "[[Roman]] [[mythology]]",
topical_categories = true,
}
labels["Roman numerals"] = {
display = "[[Roman numeral]]s",
topical_categories = true,
}
labels["roofing"] = {
display = "[[roofing#Noun|roofing]]",
topical_categories = true,
}
labels["rosiculture"] = {
display = "[[rosiculture]]",
topical_categories = true,
}
labels["rowing"] = {
display = "[[rowing#Noun|rowing]]",
topical_categories = true,
}
labels["Rubik's Cube"] = {
aliases = {"Rubik's cubes"},
display = "[[Rubik's Cube]]",
topical_categories = true,
}
labels["rugby"] = {
display = "[[rugby]]",
topical_categories = true,
}
labels["rugby league"] = {
display = "[[rugby league]]",
topical_categories = true,
}
labels["rugby union"] = {
display = "[[rugby union]]",
topical_categories = true,
}
labels["sailing"] = {
display = "[[sailing#Noun|sailing]]",
topical_categories = true,
}
labels["science fiction"] = {
aliases = {"scifi", "sci fi", "sci-fi"},
display = "[[science fiction]]",
topical_categories = true,
}
labels["sciences"] = {
aliases = {"science", "scientific"},
display = "[[sciences]]",
topical_categories = true,
}
labels["Scientology"] = {
display = "[[Scientology]]",
topical_categories = true,
}
-- Note: this is the usual term, not "Scottish law".
labels["Scots law"] = {
aliases = {"Scottish law", "Scotland law", "Scots Law", "Scottish Law", "Scotland Law"},
Wikipedia = true,
topical_categories = true,
}
labels["Scouting"] = {
aliases = {"scouting"},
display = "[[scouting]]",
topical_categories = true,
}
labels["Scrabble"] = {
display = "''[[Scrabble]]''",
topical_categories = true,
}
labels["scrapbooks"] = {
display = "[[scrapbook]]s",
topical_categories = true,
}
labels["sculpture"] = {
display = "[[sculpture]]",
topical_categories = true,
}
labels["seduction community"] = {
display = "[[w:Seduction community|seduction community]]",
topical_categories = true,
}
labels["seismology"] = {
display = "[[seismology]]",
topical_categories = true,
}
labels["semantics"] = {
display = "[[semantics]]",
topical_categories = true,
}
labels["semiotics"] = {
display = "[[semiotics]]",
topical_categories = true,
}
labels["semiconductors"] = {
display = "[[semiconductor]]s",
topical_categories = true,
}
labels["set theory"] = {
display = "[[set theory]]",
topical_categories = true,
}
labels["sewing"] = {
display = "[[sewing#Noun|sewing]]",
topical_categories = true,
}
labels["sex"] = {
display = "[[sex]]",
topical_categories = true,
}
labels["sexology"] = {
display = "[[sexology]]",
topical_categories = true,
}
labels["sex position"] = {
display = "[[sex]]",
topical_categories = "Sex positions",
}
labels["sexuality"] = {
display = "[[sexuality]]",
topical_categories = true,
}
labels["Shaivism"] = {
display = "[[Shaivism]]",
topical_categories = true,
}
labels["shamanism"] = {
aliases = {"Shamanism"},
display = "[[shamanism]]",
topical_categories = true,
}
labels["Shi'ism"] = {
aliases = {"Shia", "Shi'ite", "Shi'i"},
display = "[[Shia Islam]]",
topical_categories = true,
}
labels["Shinto"] = {
display = "[[Shinto]]",
topical_categories = true,
}
labels["ship parts"] = {
display = "[[nautical]]",
topical_categories = "Ship parts",
}
labels["shipping"] = {
display = "[[shipping#Noun|shipping]]",
topical_categories = true,
}
labels["shoemaking"] = {
display = "[[shoemaking]]",
topical_categories = true,
}
labels["shogi"] = {
display = "[[shogi]]",
topical_categories = true,
}
labels["signal processing"] = {
display = "[[w:Signal processing|signal processing]]",
topical_categories = true,
}
labels["Sikhism"] = {
display = "[[Sikhism]]",
topical_categories = true,
}
labels["singing"] = {
display = "[[singing#Noun|singing]]",
topical_categories = true,
}
labels["skateboarding"] = {
display = "[[skateboarding#Noun|skateboarding]]",
topical_categories = true,
}
labels["skating"] = {
display = "[[skating#Noun|skating]]",
topical_categories = true,
}
labels["skiing"] = {
display = "[[skiing#Noun|skiing]]",
topical_categories = true,
}
labels["Slavic god"] = {
display = "[[Slavic]] [[mythology]]",
topical_categories = "Slavic deities",
}
labels["Slavic mythology"] = {
display = "[[Slavic]] [[mythology]]",
topical_categories = true,
}
labels["smoking"] = {
display = "[[smoking#Noun|smoking]]",
topical_categories = true,
}
labels["snooker"] = {
display = "[[snooker#Noun|snooker]]",
topical_categories = true,
}
labels["snowboarding"] = {
display = "[[snowboarding#Noun|snowboarding]]",
topical_categories = true,
}
labels["soccer"] = {
aliases = {"football", "association football"},
display = "[[soccer]]",
topical_categories = "Football (soccer)",
}
labels["social sciences"] = {
aliases = {"social science"},
display = "[[social science]]s",
topical_categories = true,
}
labels["socialism"] = {
display = "[[socialism]]",
topical_categories = true,
}
labels["social media"] = {
display = "[[social media]]",
topical_categories = true,
}
labels["sociolinguistics"] = {
display = "[[sociolinguistics]]",
topical_categories = true,
}
labels["sociology"] = {
display = "[[sociology]]",
topical_categories = true,
}
labels["softball"] = {
display = "[[softball]]",
topical_categories = true,
}
labels["software"] = {
display = "[[software]]",
topical_categories = true,
}
labels["software architecture"] = {
display = "[[software architecture]]",
topical_categories = {"Software engineering", "Programming"},
}
labels["software engineering"] = {
aliases = {"software development"},
display = "[[software engineering]]",
topical_categories = true,
}
labels["soil science"] = {
display = "[[soil science]]",
topical_categories = true,
}
labels["sound"] = {
display = "[[sound#Noun|sound]]",
topical_categories = true,
}
labels["sound engineering"] = {
display = "[[sound engineering]]",
topical_categories = true,
}
labels["South Korean idol fandom"] = {
display = "[[South Korean]] [[idol]] [[fandom]]",
topical_categories = true,
}
labels["South Park"] = {
display = "''[[w:South Park|South Park]]''",
topical_categories = true,
}
labels["Soviet Union"] = {
aliases = {"USSR"},
display = "[[Soviet Union]]",
topical_categories = true,
}
labels["space flight"] = {
aliases = {"spaceflight", "space travel"},
display = "[[space flight]]",
topical_categories = "Space",
}
labels["space science"] = {
aliases = {"space"},
display = "[[space science]]",
topical_categories = "Space",
}
labels["spectroscopy"] = {
display = "[[spectroscopy]]",
topical_categories = true,
}
labels["speedrunning"] = {
aliases = {"speedrun", "speedruns"},
display = "[[speedrunning]]",
topical_categories = true,
}
labels["spinning"] = {
display = "[[spinning]]",
topical_categories = true,
}
labels["spiritualism"] = {
display = "[[spiritualism]]",
topical_categories = true,
}
labels["sports"] = {
aliases = {"sport"},
display = "[[sports]]",
topical_categories = true,
}
labels["squash"] = {
display = "[[w:squash (sport)|squash]]",
topical_categories = true,
}
labels["statistical mechanics"] = {
display = "[[statistical mechanics]]",
topical_categories = true,
}
labels["statistics"] = {
display = "[[statistics]]",
topical_categories = true,
}
labels["Star Wars"] = {
display = "''[[Star Wars]]''",
topical_categories = true,
}
labels["stock market"] = {
display = "[[stock market]]",
topical_categories = true,
}
labels["stock ticker symbol"] = {
aliases = {"stock symbol"},
display = "[[stock ticker symbol]]",
topical_categories = "Stock symbols for companies",
}
labels["subculture"] = {
display = "[[subculture]]",
topical_categories = "Culture",
}
labels["Sufism"] = {
aliases = {"Sufi Islam"},
display = "[[w:Sufism|Sufism]]",
topical_categories = true,
}
labels["sumo"] = {
display = "[[sumo]]",
topical_categories = true,
}
labels["supply chain"] = {
display = "[[supply chain]]",
topical_categories = true,
}
labels["surfing"] = {
display = "[[surfing#Noun|surfing]]",
topical_categories = true,
}
labels["surgery"] = {
display = "[[surgery]]",
topical_categories = true,
}
labels["surveying"] = {
display = "[[surveying#Noun|surveying]]",
topical_categories = true,
}
labels["sushi"] = {
display = "[[sushi]]",
topical_categories = true,
}
labels["swimming"] = {
display = "[[swimming#Noun|swimming]]",
topical_categories = true,
}
labels["swords"] = {
display = "[[sword]]s",
topical_categories = true,
}
labels["systematics"] = {
display = "[[systematics]]",
topical_categories = "Taxonomy",
}
labels["systems engineering"] = {
display = "[[systems engineering]]",
topical_categories = true,
}
labels["systems theory"] = {
display = "[[systems theory]]",
topical_categories = true,
}
labels["table tennis"] = {
display = "[[table tennis]]",
topical_categories = true,
}
labels["Taoism"] = {
aliases = {"Daoism"},
display = "[[Taoism]]",
topical_categories = true,
}
labels["tarot"] = {
display = "[[tarot]]",
topical_categories = "Cartomancy",
}
labels["taxation"] = {
aliases = {"tax", "taxes"},
display = "[[taxation]]",
topical_categories = true,
}
labels["taxonomy"] = {
display = "[[taxonomy]]",
topical_categories = true,
}
labels["technology"] = {
display = "[[technology]]",
topical_categories = true,
}
labels["telecommunications"] = {
aliases = {"telecommunication", "telecom"},
display = "[[telecommunications]]",
topical_categories = true,
}
labels["telegraphy"] = {
display = "[[telegraphy]]",
topical_categories = true,
}
labels["telephony"] = {
aliases = {"telephone", "telephones"},
display = "[[telephony]]",
topical_categories = true,
}
labels["television"] = {
aliases = {"TV"},
display = "[[television]]",
topical_categories = true,
}
labels["Tumblr aesthetic"] = {
display = "[[Tumblr]] aesthetic",
topical_categories = "Aesthetics",
}
labels["tennis"] = {
display = "[[tennis]]",
topical_categories = true,
}
labels["teratology"] = {
display = "[[teratology]]",
topical_categories = true,
}
labels["Tetris"] = {
display = "[[Tetris]]",
topical_categories = true,
}
labels["textiles"] = {
display = "[[textiles]]",
topical_categories = true,
}
labels["theater"] = {
aliases = {"theatre"},
display = "[[theater]]",
topical_categories = true,
}
labels["theology"] = {
display = "[[theology]]",
topical_categories = true,
}
labels["thermodynamics"] = {
display = "[[thermodynamics]]",
topical_categories = true,
}
labels["Tibetan Buddhism"] = {
display = "[[Tibetan Buddhism]]",
topical_categories = "Buddhism",
}
labels["tiddlywinks"] = {
display = "[[tiddlywinks]]",
topical_categories = true,
}
labels["TikTok aesthetic"] = {
display = "[[TikTok]] aesthetic",
topical_categories = "Aesthetics",
}
labels["time"] = {
display = "[[time]]",
topical_categories = true,
}
labels["topology"] = {
display = "[[topology]]",
topical_categories = true,
}
labels["tort law"] = {
display = "[[tort law]]",
topical_categories = "Law",
}
labels["tourism"] = {
display = "[[tourism]]",
topical_categories = true,
}
labels["toxicology"] = {
display = "[[toxicology]]",
topical_categories = true,
}
labels["trading"] = {
display = "[[trading#Noun|trading]]",
topical_categories = true,
}
labels["trading cards"] = {
display = "[[trading card]]s",
topical_categories = true,
}
labels["traditional Chinese medicine"] = {
aliases = {"TCM", "Chinese medicine"},
display = "[[traditional Chinese medicine]]",
topical_categories = true,
}
labels["traditional Korean medicine"] = {
aliases = {"Korean medicine"},
display = "{{w|traditional Korean medicine}}",
topical_categories = true,
}
labels["transgender"] = {
display = "[[transgender]]",
topical_categories = true,
}
labels["translation studies"] = {
display = "[[translation studies]]",
topical_categories = true,
}
labels["transport"] = {
aliases = {"transportation"},
display = "[[transport]]",
topical_categories = true,
}
labels["traumatology"] = {
display = "[[traumatology]]",
topical_categories = "Emergency medicine",
}
labels["travel"] = {
display = "[[travel]]",
topical_categories = true,
}
labels["trigonometry"] = {
display = "[[trigonometry]]",
topical_categories = true,
}
labels["trigonometric function"] = {
display = "[[trigonometry]]",
topical_categories = "Trigonometric functions",
}
labels["trust law"] = {
display = "[[trust law]]",
topical_categories = "Law",
}
labels["two-up"] = {
display = "[[two-up]]",
topical_categories = true,
}
labels["Twitter"] = {
aliases = {"twitter"},
display = "[[Twitter#Proper noun|Twitter]]",
topical_categories = true,
}
labels["typography"] = {
aliases = {"typesetting"},
display = "[[typography]]",
topical_categories = true,
}
labels["ufology"] = {
display = "[[ufology]]",
topical_categories = true,
}
labels["underwater diving"] = {
aliases = {"scuba", "scuba diving"},
display = "[[underwater]] [[diving#Noun|diving]]",
topical_categories = true,
}
labels["Unicode"] = {
aliases = {"Unicode standard"},
Wikipedia = true,
topical_categories = true,
}
labels["urban studies"] = {
aliases = {"urbanism", "urban planning"},
display = "[[urban studies]]",
topical_categories = true,
}
labels["urology"] = {
display = "[[urology]]",
topical_categories = true,
}
labels["Vaishnavism"] = {
display = "[[Vaishnavism]]",
topical_categories = true,
}
labels["Valentinianism"] = {
aliases = {"valentinianism"},
display = "[[w:Valentinianism|Valentinianism]]",
topical_categories = true,
}
labels["Vedic religion"] = {
aliases = {"Vedic Hinduism", "Ancient Hinduism", "ancient Hinduism", "Vedism", "Vedicism"},
display = "[[w:Historical Vedic religion|Vedic religion]]",
topical_categories = true,
}
labels["vegetable"] = {
aliases = {"vegetables"},
display = "[[vegetable]]",
topical_categories = "Vegetables",
}
labels["vehicles"] = {
aliases = {"vehicle"},
display = "[[vehicle]]s",
topical_categories = true,
}
labels["veterinary medicine"] = {
display = "[[veterinary medicine]]",
topical_categories = true,
}
labels["video compression"] = {
display = "[[w:Video compression|video compression]]",
topical_categories = true,
}
labels["video games"] = {
aliases = {"video game", "video gaming"},
display = "[[video game]]s",
topical_categories = true,
}
labels["virology"] = {
display = "[[virology]]",
topical_categories = true,
}
labels["virus"] = {
display = "[[virology]]",
topical_categories = "Viruses",
}
labels["viticulture"] = {
display = "[[viticulture]]",
topical_categories = {"Horticulture", "Wine"},
}
labels["volcanology"] = {
aliases = {"vulcanology"},
display = "[[volcanology]]",
topical_categories = true,
}
labels["volleyball"] = {
display = "[[volleyball]]",
topical_categories = true,
}
labels["voodoo"] = {
display = "[[voodoo]]",
topical_categories = true,
}
labels["water sports"] = {
aliases = {"watersport", "watersports", "water sport"},
display = "[[watersport|water sports]]",
topical_categories = true,
}
labels["weather"] = {
topical_categories = true,
}
labels["weaving"] = {
display = "[[weaving#Noun|weaving]]",
topical_categories = true,
}
labels["web design"] = {
display = "[[web design]]",
topical_categories = true,
aliases = {"Web design"}
}
labels["web development"] = {
display = "[[web development]]",
topical_categories = {"Programming", "Web design"},
}
labels["weightlifting"] = {
display = "[[weightlifting]]",
topical_categories = true,
}
labels["white supremacy"] = { -- but also this is often used to indicate white-supremacist-used jargon; cf "Nazism"
aliases = {"white nationalism", "white nationalist", "white power", "white racism", "white supremacist ideology", "white supremacism", "white supremacist"},
Wikipedia = "White supremacy",
topical_categories = "White supremacist ideology",
}
labels["wine"] = {
display = "[[wine]]",
topical_categories = true,
}
labels["winemaking"] = {
display = "[[winemaking]]",
topical_categories = "Wine",
}
labels["woodworking"] = {
display = "[[woodworking]]",
topical_categories = true,
}
labels["World War I"] = {
aliases = {"World War 1", "WWI", "WW I", "WW1", "WW 1"},
Wikipedia = "World War I",
topical_categories = true,
}
labels["World War II"] = {
aliases = {"World War 2", "WWII", "WW II", "WW2", "WW 2"},
Wikipedia = "World War II",
topical_categories = true,
}
labels["winter sports"] = {
display = "[[winter sport]]s",
topical_categories = true,
}
labels["wrestling"] = {
display = "[[wrestling#Noun|wrestling]]",
topical_categories = true,
}
labels["writing"] = {
display = "[[writing#Noun|writing]]",
topical_categories = true,
}
labels["Yazidism"] = {
aliases = {"Yezidism"},
display = "[[Yazidism]]",
topical_categories = true,
}
labels["yoga"] = {
display = "[[yoga]]",
topical_categories = true,
}
labels["zoology"] = {
display = "[[zoology]]",
topical_categories = true,
}
labels["zootomy"] = {
display = "[[zootomy]]",
topical_categories = "Animal body parts",
}
labels["Zoroastrianism"] = {
display = "[[Zoroastrianism]]",
topical_categories = true,
}
-- Labels with set-type categories
-- TODO: These are probably misuses of the label template, and should be deprecated
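-- A hedged note on the shape of the entries below (inferred from the data, not
-- from any Module:labels documentation): unlike the topical entries above, which
-- mostly use `topical_categories = true` so the category is presumably derived
-- from the label name itself, set-type entries name their target category
-- explicitly and decouple it from the displayed text, e.g.:
--   labels["amino acid"] = { display = "[[biochemistry]]", topical_categories = "Amino acids" }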
labels["amino acid"] = {
display = "[[biochemistry]]",
topical_categories = "Amino acids",
}
labels["architectural element"] = {
aliases = {"architectural elements"},
display = "[[architecture]]",
topical_categories = "Architectural elements",
}
labels["artistic work"] = {
display = "[[art#Noun|art]]",
topical_categories = "Artistic works",
}
labels["asterism"] = {
display = "[[uranography]]",
topical_categories = "Asterisms",
}
labels["biblical character"] = {
aliases = {"Biblical character", "biblical figure", "Biblical figure"},
display = "[[Bible|biblical]]",
topical_categories = "Biblical characters",
}
labels["bibliography"] = {
display = "[[bibliography]]",
topical_categories = true,
}
labels["bicycle parts"] = {
display = "[[w:List of bicycle parts|cycling]]",
topical_categories = true,
}
labels["book of the bible"] = {
display = "[[Bible|biblical]]",
topical_categories = "Books of the Bible",
}
labels["brass instruments"] = {
aliases = {"brass instrument"},
display = "[[music]]",
topical_categories = true,
}
labels["canid"] = {
display = "[[zoology]]",
topical_categories = "Canids",
}
labels["carbohydrate"] = {
display = "[[biochemistry]]",
topical_categories = "Carbohydrates",
}
labels["carboxylic acid"] = {
display = "[[organic chemistry]]",
topical_categories = "Carboxylic acids",
}
labels["coenzyme"] = {
display = "[[biochemistry]]",
topical_categories = "Coenzymes",
}
labels["conspiracy theories"] = {
aliases = {"conspiracy theory", "conspiracy"},
display = "[[conspiracy theory#Noun|conspiracy theories]]",
topical_categories = true,
}
labels["constellation"] = {
display = "[[astronomy]]",
topical_categories = "Constellations",
}
labels["cookware"] = {
display = "[[cooking#Noun|cooking]]",
topical_categories = "Cookware and bakeware",
}
labels["currencies"] = { -- Don't merge with "numismatics", as the category is different.
aliases = {"currency"},
display = "[[numismatics]]",
topical_categories = "Currencies",
}
labels["dances"] = {
display = "[[dance#Noun|dance]]",
topical_categories = true,
}
labels["demonym"] = {
display = "[[demonym]]",
topical_categories = "Demonyms",
}
labels["disease"] = {
aliases = {"diseases"},
display = "[[pathology]]",
topical_categories = "Diseases",
}
labels["E number"] = {
display = "[[food]] [[manufacture]]",
plain_categories = "European food additive numbers",
}
labels["Egyptian god"] = {
display = "[[Egyptian]] [[mythology]]",
topical_categories = "Egyptian deities",
}
labels["element symbol"] = {
display = "[[chemistry]]",
plain_categories = "Symbols for chemical elements",
}
labels["enzyme"] = {
display = "[[biochemistry]]",
topical_categories = "Enzymes",
}
labels["fatty acid"] = {
display = "[[organic chemistry]]",
topical_categories = "Fatty acids",
}
labels["felid"] = {
aliases = {"cat"},
display = "[[zoology]]",
topical_categories = "Felids",
}
labels["fictional character"] = {
display = "[[fiction]]",
topical_categories = "Fictional characters",
}
labels["figure of speech"] = {
display = "[[rhetoric]]",
topical_categories = "Figures of speech",
}
labels["fish"] = {
display = "[[zoology]]",
topical_categories = true,
}
labels["footwear"] = {
display = "[[footwear]]",
topical_categories = true,
}
labels["functional group prefix"] = {
display = "[[organic chemistry]]",
topical_categories = "Functional group prefixes",
}
labels["functional group suffix"] = {
display = "[[organic chemistry]]",
topical_categories = "Functional group suffixes",
}
labels["functional programming"] = {
display = "[[functional programming]]",
topical_categories = "Programming",
}
labels["galaxy"] = {
display = "[[astronomy]]",
topical_categories = "Galaxies",
}
labels["genetic disorder"] = {
display = "[[medical]] [[genetics]]",
topical_categories = "Genetic disorders",
}
labels["Greek god"] = {
aliases = {"Greek goddess"},
display = "[[Greek]] [[mythology]]",
topical_categories = "Greek deities",
}
labels["hand games"] = {
aliases = {"hand game"},
display = "[[hand]] [[game]]s",
topical_categories = true,
}
labels["heraldic charge"] = {
aliases = {"heraldiccharge"},
display = "[[heraldry]]",
topical_categories = "Heraldic charges",
}
labels["Hindu god"] = {
display = "[[Hinduism]]",
topical_categories = "Hindu deities",
}
labels["historical currencies"] = {
aliases = {"historical currency"},
display = "[[numismatics]]",
sense_categories = "historical",
topical_categories = "Historical currencies",
}
labels["historical period"] = {
aliases = {"historical periods"},
display = "[[history]]",
topical_categories = "Historical periods",
}
labels["hormone"] = {
display = "[[biochemistry]]",
topical_categories = "Hormones",
}
labels["hydrocarbon chain prefix"] = {
display = "[[organic chemistry]]",
topical_categories = "Hydrocarbon chain prefixes",
}
labels["hydrocarbon chain suffix"] = {
display = "[[organic chemistry]]",
topical_categories = "Hydrocarbon chain suffixes",
}
labels["incoterm"] = {
display = "[[Incoterm]]",
topical_categories = "Incoterms",
}
labels["inorganic compound"] = {
display = "[[inorganic chemistry]]",
topical_categories = "Inorganic compounds",
}
labels["isotope"] = {
display = "[[physics]]",
topical_categories = "Isotopes",
}
labels["labour law"] = {
display = "[[labour law]]",
topical_categories = "Law",
}
labels["landforms"] = {
display = "[[geography]]",
topical_categories = true,
}
labels["logical fallacy"] = {
display = "[[rhetoric]]",
topical_categories = "Logical fallacies",
}
labels["lutherie"] = {
display = "[[lutherie]]",
topical_categories = true,
}
labels["Mesopotamian god"] = {
display = "[[Mesopotamian]] [[mythology]]",
topical_categories = "Mesopotamian deities",
}
labels["metamaterial"] = {
display = "[[physics]]",
topical_categories = "Metamaterials",
}
labels["military ranks"] = {
aliases = {"military rank"},
display = "[[military]]",
topical_categories = true,
}
labels["military unit"] = {
display = "[[military]]",
topical_categories = "Military units",
}
labels["mineral"] = {
display = "[[mineralogy]]",
topical_categories = "Minerals",
}
labels["mobile phones"] = {
aliases = {"cell phone", "cell phones", "mobile phone", "mobile telephony"},
display = "[[mobile telephone|mobile telephony]]",
topical_categories = true,
}
labels["muscle"] = {
display = "[[anatomy]]",
topical_categories = "Muscles",
}
labels["mushroom"] = {
aliases = {"mushrooms"},
display = "[[mycology]]",
topical_categories = "Mushrooms",
}
labels["musical instruments"] = {
aliases = {"musical instrument"},
display = "[[music]]",
topical_categories = true,
}
labels["music genre"] = {
display = "[[music]]",
topical_categories = "Musical genres",
}
labels["musician"] = {
display = "[[music]]",
topical_categories = "Musicians",
}
labels["mythological creature"] = {
aliases = {"mythological creatures"},
display = "[[mythology]]",
topical_categories = "Mythological creatures",
}
labels["neurotoxin"] = {
display = "[[neurotoxicology]]",
topical_categories = "Neurotoxins",
}
labels["neurotransmitter"] = {
display = "[[biochemistry]]",
topical_categories = "Neurotransmitters",
}
labels["organic compound"] = {
display = "[[organic chemistry]]",
topical_categories = "Organic compounds",
}
labels["part of speech"] = {
display = "[[grammar]]",
topical_categories = "Parts of speech",
}
labels["particle"] = {
display = "[[physics]]",
topical_categories = "Subatomic particles",
}
labels["percussion instruments"] = {
aliases = {"percussion instrument"},
display = "[[music]]",
topical_categories = true,
}
labels["pharmaceutical drug"] = {
display = "[[pharmacology]]",
topical_categories = "Pharmaceutical drugs",
}
labels["pharmaceutical effect"] = {
display = "[[pharmacology]]",
topical_categories = "Pharmaceutical effects",
}
labels["plant"] = {
display = "[[botany]]",
topical_categories = "Plants",
}
labels["plant disease"] = {
display = "[[phytopathology]]",
topical_categories = "Plant diseases",
}
labels["poison"] = {
display = "[[toxicology]]",
topical_categories = "Poisons",
}
labels["political subdivision"] = {
display = "[[government]]",
topical_categories = "Political subdivisions",
}
labels["protein"] = {
aliases = {"proteins"},
display = "[[biochemistry]]",
topical_categories = "Proteins",
}
labels["rock"] = {
display = "[[petrology]]",
topical_categories = "Rocks",
}
labels["Roman god"] = {
aliases = {"Roman goddess"},
display = "[[Roman]] [[mythology]]",
topical_categories = "Roman deities",
}
labels["schools"] = {
display = "[[education]]",
topical_categories = true,
}
labels["self-harm"] = {
aliases = {"selfharm", "self harm", "self-harm community"},
display = "[[self-harm]]",
topical_categories = true,
}
labels["SEO"] = {
display = "[[search engine optimization|SEO]]",
topical_categories = {"Internet", "Marketing"},
}
labels["skeleton"] = {
display = "[[anatomy]]",
topical_categories = true,
}
labels["standard of identity"] = {
display = "[[standard of identity|standards of identity]]",
topical_categories = "Standards of identity",
}
labels["star"] = {
display = "[[astronomy]]",
topical_categories = "Stars",
}
labels["steroid"] = {
display = "[[biochemistry]]",
topical_categories = "Steroids",
}
labels["steroid hormone"] = {
aliases = {"steroid drug"},
display = "[[biochemistry]], [[steroids]]",
topical_categories = "Hormones",
}
labels["string instruments"] = {
aliases = {"string instrument"},
display = "[[music]]",
topical_categories = true,
}
labels["surface feature"] = {
display = "[[planetology]]",
topical_categories = "Planetary nomenclature",
}
labels["sugar acid"] = {
display = "[[organic chemistry]]",
topical_categories = "Sugar acids",
}
labels["symptom"] = {
display = "[[medicine]]",
topical_categories = "Medical signs and symptoms",
}
labels["taxonomic name"] = {
display = "[[taxonomy]]",
topical_categories = "Taxonomic names",
}
labels["tincture"] = {
display = "[[heraldry]]",
topical_categories = "Heraldic tinctures",
}
labels["veterinary disease"] = {
display = "[[veterinary medicine]]",
topical_categories = "Veterinary diseases",
}
labels["video game genre"] = {
display = "[[video game]]s",
topical_categories = "Video game genres",
}
labels["vitamin"] = {
display = "[[biochemistry]]",
topical_categories = "Vitamins",
}
labels["watercraft"] = {
display = "[[nautical]]",
topical_categories = true,
}
labels["weaponry"] = {
aliases = {"weapon", "weapons"},
display = "[[weaponry]]",
topical_categories = "Weapons",
}
labels["Wicca"] = {
display = "[[Wicca]]",
topical_categories = true,
}
labels["wiki jargon"] = {
aliases = {"wiki"},
display = "[[wiki]] [[jargon]]",
topical_categories = "Wiki",
}
labels["Wikimedia jargon"] = {
aliases = {"WMF", "WMF jargon", "Wiktionary", "Wiktionary jargon", "Wikipedia", "Wikipedia jargon"},
display = "[[w:Wikimedia Foundation|Wikimedia]] [[jargon]]",
topical_categories = "Wikimedia",
}
labels["wind instruments"] = {
aliases = {"wind instrument"},
display = "[[music]]",
topical_categories = true,
}
labels["woodwind instruments"] = {
aliases = {"woodwind instrument"},
display = "[[music]]",
topical_categories = true,
}
labels["xiangqi"] = {
aliases = {"Chinese chess"},
display = "[[xiangqi]]",
topical_categories = true,
}
labels["yoga pose"] = {
aliases = {"asana"},
display = "[[yoga]]",
topical_categories = "Yoga poses",
}
labels["zodiac constellations"] = {
display = "[[astronomy]]",
topical_categories = "Constellations in the zodiac",
}
-- Deprecated/do not use warning (ambiguous, unsuitable etc)
labels["deprecated label"] = {
aliases = {"emergency", "greekmyth", "industry", "morphology", "musici", "quantum", "vector"},
display = "<span style=\"color:red;\"><b>deprecated label</b></span>",
deprecated = true,
}
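-- As a sketch of how this behaves (assuming the usual handling in [[Module:labels]]):
-- a call such as {{lb|en|industry}}, using one of the aliases above, is expected to
-- render the red "deprecated label" text, flagging the entry to be retagged with a
-- more specific label; the deprecated = true flag is what marks such uses for tracking.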
return require("Module:labels").finalize_data(labels)
i5b6jn88w4d7gmqyko8bl7cviazqsmm
193330
193329
2024-11-20T12:04:01Z
Lee
19
One revision from [[:en:Module:labels/data/topical]]
193329
Scribunto
text/plain
local labels = {}
-- This file is split into two sections: topical labels and labels for set-type categories.
-- Each section is sorted alphabetically.
-- Topical labels
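-- A minimal sketch of the shape of an entry, with illustrative values only
-- (field semantics are defined by [[Module:labels]], not by this data file):
--   labels["example topic"] = {
--       aliases = {"example alias"},    -- optional alternative spellings of the label
--       display = "[[example topic]]",  -- wikitext shown in the definition line
--       topical_categories = true,      -- true: categorise under a name derived from the label;
--                                       -- a string or a table names the categories explicitly
--   }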
labels["ABDL"] = {
display = "[[ABDL]]",
topical_categories = true,
}
labels["Abrahamism"] = {
display = "[[Abrahamism#Noun|Abrahamism]]",
topical_categories = true,
}
labels["accounting"] = {
display = "[[accounting#Noun|accounting]]",
topical_categories = true,
}
labels["acoustics"] = {
display = "[[acoustics]]",
topical_categories = true,
}
labels["acting"] = {
display = "[[acting#Noun|acting]]",
topical_categories = true,
}
labels["advertising"] = {
display = "[[advertising#Noun|advertising]]",
topical_categories = true,
}
labels["aeronautics"] = {
display = "[[aeronautics]]",
topical_categories = true,
}
labels["aerospace"] = {
display = "[[aerospace]]",
topical_categories = true,
}
labels["aesthetic"] = {
aliases = {"aesthetics"},
display = "[[aesthetic]]",
topical_categories = "Aesthetics",
}
labels["agriculture"] = {
aliases = {"farming"},
display = "[[agriculture]]",
topical_categories = true,
}
labels["Ahmadiyya"] = {
aliases = {"Ahmadiyyat", "Ahmadi"},
display = "[[Ahmadiyya]]",
topical_categories = true,
}
labels["aircraft"] = {
display = "[[aircraft]]",
topical_categories = true,
}
labels["alchemy"] = {
display = "[[alchemy]]",
topical_categories = true,
}
labels["alcoholic beverages"] = {
aliases = {"alcohol"},
display = "[[alcoholic#Adjective|alcoholic]] [[beverage]]s",
topical_categories = true,
}
labels["alcoholism"] = {
display = "[[alcoholism]]",
topical_categories = true,
}
labels["algebra"] = {
display = "[[algebra]]",
topical_categories = true,
}
labels["algebraic geometry"] = {
display = "[[algebraic geometry]]",
topical_categories = true,
}
labels["algebraic topology"] = {
display = "[[algebraic topology]]",
topical_categories = true,
}
labels["alt-right"] = {
aliases = {"Alt-right", "altright", "Altright"},
display = "[[alt-right]]",
topical_categories = true,
}
labels["alternative medicine"] = {
display = "[[alternative medicine]]",
topical_categories = true,
}
labels["amateur radio"] = {
aliases = {"ham radio"},
display = "[[amateur radio]]",
topical_categories = true,
}
labels["American football"] = {
display = "[[American football]]",
topical_categories = "Football (American)",
}
labels["analytic geometry"] = {
display = "[[analytic geometry]]",
topical_categories = "Geometry",
}
labels["analytical chemistry"] = {
display = "[[analytical]] [[chemistry]]",
topical_categories = true,
}
labels["anarchism"] = {
display = "[[anarchism]]",
topical_categories = true,
}
labels["anatomy"] = {
display = "[[anatomy]]",
topical_categories = true,
}
labels["Ancient Greece"] = {
display = "[[Ancient Greece]]",
topical_categories = true,
}
labels["Ancient Rome"] = {
display = "[[Ancient Rome]]",
topical_categories = true,
}
labels["Anglicanism"] = {
aliases = {"Anglican"},
display = "[[Anglicanism]]",
topical_categories = true,
}
labels["animation"] = {
display = "[[animation]]",
topical_categories = true,
}
labels["anime"] = {
display = "[[anime]]",
topical_categories = "Japanese fiction",
}
labels["anthropology"] = {
display = "[[anthropology]]",
topical_categories = true,
}
labels["arachnology"] = {
display = "[[arachnology]]",
topical_categories = true,
}
labels["Arabian god"] = {
display = "[[Arabian]] [[mythology]]",
topical_categories = "Arabian deities",
}
labels["archaeological culture"] = {
aliases = {"archeological culture", "archaeological cultures", "archeological cultures"},
display = "[[archaeology]]",
topical_categories = "Archaeological cultures",
}
labels["archaeology"] = {
aliases = {"archeology"},
display = "[[archaeology]]",
topical_categories = true,
}
labels["archery"] = {
display = "[[archery]]",
topical_categories = true,
}
labels["architecture"] = {
display = "[[architecture]]",
topical_categories = true,
}
labels["arithmetic"] = {
display = "[[arithmetic]]",
topical_categories = true,
}
labels["Armenian mythology"] = {
display = "[[Armenian]] [[mythology]]",
topical_categories = true,
}
labels["art"] = {
aliases = {"arts"},
display = "[[art#Noun|art]]",
topical_categories = true,
}
labels["artificial intelligence"] = {
aliases = {"AI"},
display = "[[artificial intelligence]]",
topical_categories = true,
}
labels["artillery"] = {
display = "[[weaponry]]",
topical_categories = true,
}
labels["Arthurian legend"] = {
aliases = {"Arthurian mythology"},
display = "[[w:Arthurian legend|Arthurian legend]]",
topical_categories = "Arthurian mythology",
}
labels["astrology"] = {
aliases = {"horoscope", "zodiac"},
display = "[[astrology]]",
topical_categories = true,
}
labels["astronautics"] = {
aliases = {"rocketry"},
display = "[[astronautics]]",
topical_categories = true,
}
labels["astronomy"] = {
display = "[[astronomy]]",
topical_categories = true,
}
labels["astrophysics"] = {
display = "[[astrophysics]]",
topical_categories = true,
}
labels["Asturian mythology"] = {
display = "[[Asturian]] [[mythology]]",
topical_categories = true,
}
labels["athletics"] = {
display = "[[athletics]]",
topical_categories = true,
}
labels["Australian Aboriginal mythology"] = {
display = "[[w:Australian Aboriginal religion and mythology|Australian Aboriginal mythology]]",
topical_categories = true,
}
labels["Australian rules football"] = {
display = "[[Australian rules football]]",
topical_categories = true,
}
labels["autism"] = {
display = "[[autism]]",
topical_categories = true,
}
labels["automotive"] = {
aliases = {"automotives"},
display = "[[automotive]]",
topical_categories = true,
}
labels["aviation"] = {
aliases = {"air transport"},
display = "[[aviation]]",
topical_categories = true,
}
labels["backgammon"] = {
display = "[[backgammon]]",
topical_categories = true,
}
labels["bacteria"] = {
display = "[[bacteriology]]",
topical_categories = true,
}
labels["bacteriology"] = {
display = "[[bacteriology]]",
topical_categories = true,
}
labels["badminton"] = {
display = "[[badminton]]",
topical_categories = true,
}
labels["baking"] = {
display = "[[baking#Noun|baking]]",
topical_categories = true,
}
labels["ball games"] = {
aliases = {"ball sports"},
display = "[[ball game]]s",
topical_categories = true,
}
labels["ballet"] = {
display = "[[ballet]]",
topical_categories = true,
}
labels["Bangladeshi politics"] = {
display = "[[w:Politics of Bangladesh|Bangladeshi politics]]",
topical_categories = true,
}
labels["banking"] = {
display = "[[banking#Noun|banking]]",
topical_categories = true,
}
labels["baseball"] = {
display = "[[baseball]]",
topical_categories = true,
}
labels["basketball"] = {
display = "[[basketball]]",
topical_categories = true,
}
labels["BDSM"] = {
display = "[[BDSM]]",
topical_categories = true,
}
labels["beekeeping"] = {
display = "[[beekeeping]]",
topical_categories = true,
}
labels["beer"] = {
display = "[[beer]]",
topical_categories = true,
}
labels["betting"] = {
display = "[[gambling#Noun|gambling]]",
topical_categories = true,
}
labels["biblical"] = {
aliases = {"Bible", "bible", "Biblical"},
display = "[[Bible|biblical]]",
topical_categories = "Bible",
}
labels["billiards"] = {
display = "[[billiards]]",
topical_categories = true,
}
labels["bingo"] = {
display = "[[bingo]]",
topical_categories = true,
}
labels["biochemistry"] = {
display = "[[biochemistry]]",
topical_categories = true,
}
labels["biology"] = {
display = "[[biology]]",
topical_categories = true,
}
labels["biotechnology"] = {
display = "[[biotechnology]]",
topical_categories = true,
}
labels["birdwatching"] = {
display = "[[birdwatching#Noun|birdwatching]]",
topical_categories = true,
}
labels["blacksmithing"] = {
display = "[[blacksmithing]]",
topical_categories = true,
}
labels["blogging"] = {
display = "[[blogging#Noun|blogging]]",
topical_categories = "Internet",
}
labels["board games"] = {
aliases = {"board game"},
display = "[[board game]]s",
topical_categories = true,
}
labels["board sports"] = {
display = "[[boardsport|board sports]]",
topical_categories = true,
}
labels["bodybuilding"] = {
display = "[[bodybuilding#Noun|bodybuilding]]",
topical_categories = true,
}
labels["botany"] = {
display = "[[botany]]",
topical_categories = true,
}
labels["bowling"] = {
display = "[[bowling#Noun|bowling]]",
topical_categories = true,
}
labels["bowls"] = {
aliases = {"lawn bowls", "crown green bowls"},
display = "[[bowls]]",
topical_categories = "Bowls (game)",
}
labels["boxing"] = {
display = "[[boxing#Noun|boxing]]",
topical_categories = true,
}
labels["brewing"] = {
display = "[[brewing#Noun|brewing]]",
topical_categories = true,
}
labels["bridge"] = {
display = "[[bridge#English:_game|bridge]]",
topical_categories = true,
}
labels["broadcasting"] = {
display = "[[broadcasting#Noun|broadcasting]]",
topical_categories = true,
}
labels["bryology"] = {
display = "[[bryology]]",
topical_categories = true,
}
labels["Buddhism"] = {
display = "[[Buddhism]]",
topical_categories = true,
}
labels["Buddhist deity"] = {
aliases = {"Buddhist goddess", "Buddhist god"},
display = "[[Buddhism]]",
topical_categories = "Buddhist deities",
}
labels["bullfighting"] = {
display = "[[bullfighting]]",
topical_categories = true,
}
labels["business"] = {
aliases = {"professional"},
display = "[[business]]",
topical_categories = true,
}
labels["Byzantine Empire"] = {
display = "[[Byzantine Empire]]",
topical_categories = true,
}
labels["calculus"] = {
display = "[[calculus]]",
topical_categories = true,
}
labels["calligraphy"] = {
display = "[[calligraphy]]",
topical_categories = true,
}
labels["Canadian football"] = {
display = "[[Canadian football]]",
topical_categories = true,
}
labels["canoeing"] = {
display = "[[canoeing#Noun|canoeing]]",
topical_categories = "Water sports",
}
labels["capitalism"] = {
display = "[[capitalism]]",
topical_categories = true,
}
labels["card games"] = {
aliases = {"cards", "card game", "playing card"},
display = "[[card game]]s",
topical_categories = true,
}
labels["cardiology"] = {
display = "[[cardiology]]",
topical_categories = true,
}
labels["carpentry"] = {
display = "[[carpentry]]",
topical_categories = true,
}
labels["cartography"] = {
display = "[[cartography]]",
topical_categories = true,
}
labels["cartomancy"] = {
display = "[[cartomancy]]",
topical_categories = true,
}
labels["castells"] = {
display = "[[castells]]",
topical_categories = true,
}
labels["category theory"] = {
display = "[[category theory]]",
topical_categories = true,
}
labels["Catholicism"] = {
aliases = {"catholicism", "Catholic", "catholic"},
display = "[[Catholicism]]",
topical_categories = true,
}
labels["caving"] = {
display = "[[caving#Noun|caving]]",
topical_categories = true,
}
labels["cellular automata"] = {
display = "[[cellular automata]]",
topical_categories = true,
}
labels["Celtic mythology"] = {
display = "[[Celtic]] [[mythology]]",
topical_categories = true,
}
labels["ceramics"] = {
display = "[[ceramics]]",
topical_categories = true,
}
labels["cheerleading"] = {
display = "[[cheerleading#Noun|cheerleading]]",
topical_categories = true,
}
labels["chemical element"] = {
display = "[[chemistry]]",
topical_categories = "Chemical elements",
}
labels["chemical engineering"] = {
display = "[[chemical engineering]]",
topical_categories = true,
}
labels["chemistry"] = {
display = "[[chemistry]]",
topical_categories = true,
}
labels["chess"] = {
display = "[[chess]]",
topical_categories = true,
}
labels["children's games"] = {
display = "[[children|children's]] [[game]]s",
topical_categories = true,
}
labels["Church of England"] = {
aliases = {"C of E", "CofE"},
Wikipedia = "Church of England",
topical_categories = true,
}
labels["Chinese astronomy"] = {
display = "[[Chinese]] [[astronomy]]",
topical_categories = true,
}
labels["Chinese calligraphy"] = {
display = "[[Chinese]] [[calligraphy]]",
topical_categories = "Calligraphy",
}
labels["Chinese constellation"] = {
display = "[[Chinese]] [[astronomy]]",
topical_categories = "Constellations",
}
labels["Chinese folk religion"] = {
display = "[[Chinese]] [[folk religion]]",
topical_categories = "Religion",
}
labels["Chinese linguistics"] = {
display = "[[Chinese]] [[linguistics]]",
topical_categories = "Linguistics",
}
labels["Chinese mythology"] = {
display = "[[Chinese]] [[mythology]]",
topical_categories = true,
}
labels["Chinese philosophy"] = {
display = "[[Chinese]] [[philosophy]]",
topical_categories = true,
}
labels["Chinese phonetics"] = {
display = "[[Chinese]] [[phonetics]]",
topical_categories = true,
}
labels["Chinese religion"] = {
display = "[[Chinese]] [[religion]]",
topical_categories = "Religion",
}
labels["Chinese star"] = {
display = "[[Chinese]] [[astronomy]]",
topical_categories = "Stars",
}
labels["Christianity"] = {
aliases = {"christianity", "Christian", "christian"},
display = "[[Christianity]]",
topical_categories = true,
}
labels["Church of the East"] = {
display = "[[Church of the East]]",
topical_categories = true,
}
labels["cinematography"] = {
aliases = {"filmology"},
display = "[[cinematography]]",
topical_categories = true,
}
labels["cladistics"] = {
display = "[[cladistics]]",
topical_categories = "Taxonomy",
}
labels["classical mechanics"] = {
display = "[[classical mechanics]]",
topical_categories = true,
}
labels["classical studies"] = {
display = "[[classical studies]]",
topical_categories = true,
}
labels["climatology"] = {
display = "[[climatology]]",
topical_categories = true,
}
labels["climate change"] = {
display = "[[climate change]]",
topical_categories = true,
}
labels["climbing"] = {
aliases = {"rock climbing"},
display = "[[climbing#Noun|climbing]]",
topical_categories = true,
}
labels["clinical psychology"] = {
display = "[[clinical]] [[psychology]]",
topical_categories = true,
}
labels["clothing"] = {
display = "[[clothing#Noun|clothing]]",
topical_categories = true,
}
labels["cloud computing"] = {
display = "[[cloud computing]]",
topical_categories = "Computing",
}
labels["collectible card games"] = {
aliases = {"trading card games", "collectible cards", "trading cards"},
display = "collectible card games",
topical_categories = true,
}
labels["combinatorics"] = {
display = "[[combinatorics]]",
topical_categories = true,
}
labels["comedy"] = {
display = "[[comedy]]",
topical_categories = true,
}
labels["commercial law"] = {
display = "[[commercial#Adjective|commercial]] [[law]]",
topical_categories = true,
}
labels["comics"] = {
display = "[[comics]]",
topical_categories = true,
}
labels["communication"] = {
aliases = {"communications"},
display = "[[communication]]",
topical_categories = true,
}
labels["communism"] = {
aliases = {"Communism"},
display = "[[communism]]",
topical_categories = true,
}
labels["compilation"] = {
aliases = {"compiler"},
display = "[[software]] [[compilation]]",
topical_categories = true,
}
labels["complex analysis"] = {
display = "[[complex analysis]]",
topical_categories = true,
}
labels["computational linguistics"] = {
display = "[[computational linguistics]]",
topical_categories = true,
}
labels["computer chess"] = {
display = "[[computer chess]]",
topical_categories = true,
}
labels["computer games"] = {
aliases = {"computer game", "computer gaming"},
display = "[[computer game]]s",
topical_categories = "Video games",
}
labels["computer graphics"] = {
display = "[[computer graphics]]",
topical_categories = true,
}
labels["computer hardware"] = {
display = "[[computer]] [[hardware]]",
topical_categories = true,
}
labels["computer languages"] = {
aliases = {"computer language", "programming language"},
display = "[[computer language]]s",
topical_categories = true,
}
labels["computer science"] = {
aliases = {"comp sci", "CompSci", "compsci"},
display = "[[computer science]]",
topical_categories = true,
}
labels["computer security"] = {
display = "[[computer security]]",
topical_categories = true,
}
labels["computing"] = {
aliases = {"computer", "computers"},
display = "[[computing#Noun|computing]]",
topical_categories = true,
}
labels["computing theory"] = {
aliases = {"comptheory"},
display = "[[computing#Noun|computing]] [[theory]]",
topical_categories = "Theory of computing",
}
labels["conchology"] = {
display = "[[conchology]]",
topical_categories = true,
}
labels["Confucianism"] = {
display = "[[Confucianism]]",
topical_categories = true,
}
labels["conlanging"] = {
aliases = {"constructed languages", "constructed language"},
display = "[[conlanging]]",
topical_categories = true,
}
labels["conservatism"] = {
display = "[[conservatism]]",
topical_categories = true,
}
labels["construction"] = {
display = "[[construction]]",
topical_categories = true,
}
labels["cooking"] = {
aliases = {"culinary", "cuisine", "cookery", "gastronomy"},
display = "[[cooking#Noun|cooking]]",
topical_categories = true,
}
labels["copyright"] = {
aliases = {"copyright law", "intellectual property", "intellectual property law", "IP law"},
display = "[[copyright]] [[law]]",
topical_categories = true,
}
labels["cosmetics"] = {
aliases = {"cosmetology"},
display = "[[cosmetics]]",
topical_categories = true,
}
labels["cosmology"] = {
display = "[[cosmology]]",
topical_categories = true,
}
labels["creationism"] = {
aliases = {"baraminology"},
display = "[[creationism#English|creationism]]",
topical_categories = true,
}
labels["cribbage"] = {
display = "[[cribbage]]",
topical_categories = true,
}
labels["cricket"] = {
display = "[[cricket]]",
topical_categories = true,
}
labels["crime"] = {
display = "[[crime]]",
topical_categories = true,
}
labels["criminal law"] = {
display = "[[criminal law]]",
topical_categories = true,
}
labels["criminology"] = {
display = "[[criminology]]",
topical_categories = true,
}
labels["croquet"] = {
display = "[[croquet]]",
topical_categories = true,
}
labels["cryptocurrencies"] = {
aliases = {"cryptocurrency"},
display = "[[cryptocurrency|cryptocurrencies]]",
topical_categories = "Cryptocurrency",
}
labels["cryptography"] = {
display = "[[cryptography]]",
topical_categories = true,
}
labels["cryptozoology"] = {
display = "[[cryptozoology]]",
topical_categories = true,
}
labels["crystallography"] = {
display = "[[crystallography]]",
topical_categories = true,
}
labels["cultural anthropology"] = {
display = "[[cultural anthropology]]",
topical_categories = true,
}
labels["curling"] = {
display = "[[curling]]",
topical_categories = true,
}
labels["cybernetics"] = {
display = "[[cybernetics]]",
topical_categories = true,
}
labels["cycle racing"] = {
display = "[[w:cycle sport|cycle racing]]",
topical_categories = true,
}
labels["cycling"] = {
aliases = {"bicycling"},
display = "[[cycling#Noun|cycling]]",
topical_categories = true,
}
labels["cytology"] = {
display = "[[cytology]]",
topical_categories = true,
}
labels["dance"] = {
aliases = {"dancing"},
display = "[[dance#Noun|dance]]",
topical_categories = true,
}
labels["darts"] = {
display = "[[darts]]",
topical_categories = true,
}
labels["data management"] = {
display = "[[data management]]",
topical_categories = true,
}
labels["data modeling"] = {
display = "[[data modeling]]",
topical_categories = true,
}
labels["databases"] = {
aliases = {"database"},
display = "[[database]]s",
topical_categories = true,
}
labels["decision theory"] = {
display = "[[decision theory]]",
topical_categories = true,
}
labels["deltiology"] = {
display = "[[deltiology]]",
topical_categories = true,
}
labels["demography"] = {
display = "[[demography]]",
topical_categories = true,
}
labels["demoscene"] = {
topical_categories = true,
}
labels["dentistry"] = {
display = "[[dentistry]]",
topical_categories = true,
}
labels["dermatology"] = {
display = "[[dermatology]]",
topical_categories = true,
}
labels["design"] = {
display = "[[design#Noun|design]]",
topical_categories = true,
}
labels["dice games"] = {
aliases = {"dice"},
display = "[[dice game]]s",
topical_categories = true,
}
labels["dictation"] = {
display = "[[dictation]]",
topical_categories = true,
}
labels["differential geometry"] = {
display = "[[differential geometry]]",
topical_categories = true,
}
labels["diplomacy"] = {
display = "[[diplomacy]]",
topical_categories = true,
}
labels["disc golf"] = {
display = "[[disc golf]]",
topical_categories = true,
}
labels["divination"] = {
display = "[[divination]]",
topical_categories = true,
}
labels["diving"] = {
display = "[[diving#Noun|diving]]",
topical_categories = true,
}
labels["dominoes"] = {
display = "[[dominoes]]",
topical_categories = true,
}
labels["dou dizhu"] = {
display = "[[w:Dou dizhu|dou dizhu]]",
topical_categories = true,
}
labels["drama"] = {
display = "[[drama]]",
topical_categories = true,
}
labels["dressage"] = {
display = "[[dressage]]",
topical_categories = true,
}
labels["earth science"] = {
display = "[[earth science]]",
topical_categories = "Earth sciences",
}
labels["Eastern Catholicism"] = {
aliases = {"Eastern Catholic"},
display = "[[w:Eastern Catholic Churches|Eastern Catholicism]]",
topical_categories = true,
}
labels["Eastern Orthodoxy"] = {
aliases = {"Eastern Orthodox"},
display = "[[Eastern Orthodoxy]]",
topical_categories = true,
}
labels["eating disorders"] = {
aliases = {"eating disorder"},
display = "[[eating disorder]]s",
topical_categories = true,
}
labels["ecclesiastical"] = {
display = "[[ecclesiastical]]",
topical_categories = "Christianity",
}
labels["ecology"] = {
display = "[[ecology]]",
topical_categories = true,
}
labels["economics"] = {
display = "[[economics]]",
topical_categories = true,
}
labels["education"] = {
display = "[[education]]",
topical_categories = true,
}
labels["Egyptian god"] = {
aliases = {"Egyptian goddess", "Egyptian deity"},
display = "[[Egyptian]] [[mythology]]",
topical_categories = "Egyptian deities",
}
labels["Egyptian mythology"] = {
display = "[[Egyptian]] [[mythology]]",
topical_categories = true,
}
labels["Egyptology"] = {
display = "[[Egyptology]]",
topical_categories = "Ancient Egypt",
}
labels["electrencephalography"] = {
display = "[[electrencephalography]]",
topical_categories = true,
}
labels["electrical engineering"] = {
display = "[[electrical engineering]]",
topical_categories = true,
}
labels["electricity"] = {
display = "[[electricity]]",
topical_categories = true,
}
labels["electrodynamics"] = {
display = "[[electrodynamics]]",
topical_categories = true,
}
labels["electromagnetism"] = {
display = "[[electromagnetism]]",
topical_categories = true,
}
labels["electronics"] = {
display = "[[electronics]]",
topical_categories = true,
}
labels["embryology"] = {
display = "[[embryology]]",
topical_categories = true,
}
labels["emergency services"] = {
display = "[[emergency services]]",
topical_categories = true,
}
labels["emergency medicine"] = {
display = "[[emergency medicine]]",
topical_categories = true,
}
labels["endocrinology"] = {
display = "[[endocrinology]]",
topical_categories = true,
}
labels["engineering"] = {
display = "[[engineering#Noun|engineering]]",
topical_categories = true,
}
labels["enterprise engineering"] = {
display = "[[enterprise engineering]]",
topical_categories = true,
}
labels["entomology"] = {
display = "[[entomology]]",
topical_categories = true,
}
labels["epidemiology"] = {
display = "[[epidemiology]]",
topical_categories = true,
}
labels["epistemology"] = {
display = "[[epistemology]]",
topical_categories = true,
}
labels["equestrianism"] = {
aliases = {"equestrian", "horses", "horsemanship"},
display = "[[equestrianism]]",
topical_categories = true,
}
labels["espionage"] = {
display = "[[espionage]]",
topical_categories = true,
}
labels["ethics"] = {
display = "[[ethics]]",
topical_categories = true,
}
labels["ethnography"] = {
display = "[[ethnography]]",
topical_categories = true,
}
labels["ethology"] = {
display = "[[ethology]]",
topical_categories = true,
}
labels["European folklore"] = {
display = "[[European]] [[folklore]]",
topical_categories = true,
}
labels["European Union"] = {
aliases = {"EU"},
display = "[[European Union]]",
topical_categories = true,
}
labels["evolutionary theory"] = {
aliases = {"evolutionary biology"},
display = "[[evolutionary theory]]",
topical_categories = true,
}
labels["exercise"] = {
display = "[[exercise]]",
topical_categories = true,
}
labels["eye color"] = {
display = "[[eye]] [[color]]",
topical_categories = "Eye colors",
}
labels["falconry"] = {
display = "[[falconry]]",
topical_categories = true,
}
labels["fantasy"] = {
display = "[[fantasy]]",
topical_categories = true,
}
labels["farriery"] = {
display = "[[farriery]]",
topical_categories = true,
}
labels["fascism"] = {
display = "[[fascism]]",
topical_categories = true,
}
labels["fashion"] = {
display = "[[fashion]]",
topical_categories = true,
}
labels["feminism"] = {
display = "[[feminism]]",
topical_categories = true,
}
labels["fencing"] = {
display = "[[fencing#Noun|fencing]]",
topical_categories = true,
}
labels["feudalism"] = {
display = "[[feudalism|feudalism]]",
topical_categories = true,
}
labels["fiction"] = {
aliases = {"fictional"},
display = "[[fiction]]",
topical_categories = true,
}
labels["field hockey"] = {
display = "[[field hockey]]",
topical_categories = true,
}
labels["figure skating"] = {
display = "[[figure skating]]",
topical_categories = true,
}
labels["file format"] = {
display = "[[file format]]",
topical_categories = "File formats",
}
labels["film"] = {
display = "[[film#Noun|film]]",
topical_categories = true,
}
labels["film genre"] = {
aliases = {"cinema"},
display = "[[film#Noun|film]]",
topical_categories = "Film genres",
}
labels["finance"] = {
display = "[[finance#Noun|finance]]",
topical_categories = true,
}
labels["Finnic mythology"] = {
aliases = {"Finnish mythology"},
display = "[[Finnic]] [[mythology]]",
topical_categories = true,
}
labels["firearms"] = {
aliases = {"firearm"},
display = "[[firearm]]s",
topical_categories = true,
}
labels["firefighting"] = {
display = "[[firefighting]]",
topical_categories = true,
}
labels["fishing"] = {
aliases = {"angling"},
display = "[[fishing#Noun|fishing]]",
topical_categories = true,
}
labels["flamenco"] = {
display = "[[flamenco]]",
topical_categories = true,
}
labels["fluid dynamics"] = {
display = "[[fluid dynamics]]",
topical_categories = true,
}
labels["fluid mechanics"] = {
display = "[[fluid mechanics]]",
topical_categories = "Mechanics",
}
labels["folklore"] = {
display = "[[folklore]]",
topical_categories = true,
}
labels["forestry"] = {
display = "[[forestry]]",
topical_categories = true,
}
labels["Forteana"] = {
display = "[[Forteana]]",
topical_categories = true,
}
labels["Freemasonry"] = {
aliases = {"freemasonry"},
display = "[[Freemasonry]]",
topical_categories = true,
}
labels["functional analysis"] = {
display = "[[functional analysis]]",
topical_categories = true,
}
labels["furniture"] = {
display = "[[furniture]]",
topical_categories = true,
}
labels["furry fandom"] = {
display = "[[furry#Noun|furry]] [[fandom]]",
topical_categories = true,
}
labels["fuzzy logic"] = {
display = "[[fuzzy logic]]",
topical_categories = true,
}
labels["Gaelic football"] = {
display = "[[Gaelic football]]",
topical_categories = true,
}
labels["gambling"] = {
display = "[[gambling#Noun|gambling]]",
topical_categories = true,
}
labels["game theory"] = {
display = "[[game theory]]",
topical_categories = true,
}
labels["games"] = {
aliases = {"game"},
display = "[[game#Noun|games]]",
topical_categories = true,
}
labels["gaming"] = {
display = "[[gaming#Noun|gaming]]",
topical_categories = true,
}
labels["genealogy"] = {
display = "[[genealogy]]",
topical_categories = true,
}
labels["general semantics"] = {
display = "[[general semantics]]",
topical_categories = true,
}
labels["genetics"] = {
display = "[[genetics]]",
topical_categories = true,
}
labels["geography"] = {
display = "[[geography]]",
topical_categories = true,
}
labels["geology"] = {
display = "[[geology]]",
topical_categories = true,
}
labels["geological period"] = {
Wikipedia = "Geological period",
topical_categories = "Geological periods",
}
labels["geometry"] = {
display = "[[geometry]]",
topical_categories = true,
}
labels["geomorphology"] = {
display = "[[geomorphology]]",
topical_categories = true,
}
labels["geopolitics"] = {
display = "[[geopolitics]]",
topical_categories = true,
}
labels["gerontology"] = {
display = "[[gerontology]]",
topical_categories = true,
}
labels["glassblowing"] = {
display = "[[glassblowing]]",
topical_categories = true,
}
labels["Gnosticism"] = {
aliases = {"gnosticism"},
display = "[[Gnosticism]]",
topical_categories = true,
}
labels["go"] = {
aliases = {"Go", "game of go", "game of Go"},
display = "{{l|en|go|id=game}}",
topical_categories = true,
}
labels["golf"] = {
display = "[[golf]]",
topical_categories = true,
}
labels["government"] = {
display = "[[government]]",
topical_categories = true,
}
labels["grammar"] = {
display = "[[grammar]]",
topical_categories = true,
}
labels["grammatical case"] = {
display = "[[grammar]]",
topical_categories = "Grammatical cases",
}
labels["grammatical mood"] = {
display = "[[grammar]]",
topical_categories = "Grammatical moods",
}
labels["graph theory"] = {
display = "[[graph theory]]",
topical_categories = true,
}
labels["graphic design"] = {
display = "[[graphic design]]",
topical_categories = true,
}
labels["graphical user interface"] = {
aliases = {"GUI"},
display = "[[graphical user interface]]",
topical_categories = true,
}
labels["Greek mythology"] = {
display = "[[Greek]] [[mythology]]",
topical_categories = true,
}
labels["group theory"] = {
display = "[[group theory]]",
topical_categories = true,
}
labels["gun mechanisms"] = {
aliases = {"firearm mechanism", "firearm mechanisms", "gun mechanism"},
display = "[[firearm]]s",
topical_categories = true,
}
labels["gun sports"] = {
aliases = {"shooting sports"},
display = "[[gun]] [[sport]]s",
topical_categories = true,
}
labels["gymnastics"] = {
display = "[[gymnastics]]",
topical_categories = true,
}
labels["gynaecology"] = {
aliases = {"gynecology"},
display = "[[gynaecology]]",
topical_categories = true,
}
labels["hair color"] = {
display = "[[hair]] [[color]]",
topical_categories = "Hair colors",
}
labels["hairdressing"] = {
display = "[[hairdressing]]",
topical_categories = true,
}
labels["handball"] = {
display = "[[handball]]",
topical_categories = true,
}
labels["Hawaiian mythology"] = {
display = "[[Hawaiian]] [[mythology]]",
topical_categories = true,
}
labels["headwear"] = {
display = "[[clothing#Noun|clothing]]",
topical_categories = true,
}
labels["healthcare"] = {
display = "[[healthcare]]",
topical_categories = true,
}
labels["helminthology"] = {
display = "[[helminthology]]",
topical_categories = true,
}
labels["hematology"] = {
aliases = {"haematology"},
display = "[[hematology]]",
topical_categories = true,
}
labels["heraldry"] = {
display = "[[heraldry]]",
topical_categories = true,
}
labels["herbalism"] = {
display = "[[herbalism]]",
topical_categories = true,
}
labels["herpetology"] = {
display = "[[herpetology]]",
topical_categories = true,
}
labels["Hinduism"] = {
display = "[[Hinduism]]",
topical_categories = true,
}
labels["Hindutva"] = {
display = "[[Hindutva]]",
topical_categories = true,
}
labels["historiography"] = {
display = "[[historiography]]",
topical_categories = true,
}
labels["history"] = {
display = "[[history]]",
topical_categories = true,
}
labels["historical linguistics"] = {
display = "[[historical linguistics]]",
topical_categories = "Linguistics",
}
labels["hockey"] = {
display = "[[field hockey]] or [[ice hockey]]",
topical_categories = {"Field hockey", "Ice hockey"},
}
labels["homeopathy"] = {
display = "[[homeopathy]]",
topical_categories = true,
}
labels["horse color"] = {
display = "[[horse]] [[color]]",
topical_categories = "Horse colors",
}
labels["horse racing"] = {
display = "[[horse racing]]",
topical_categories = true,
}
labels["horticulture"] = {
aliases = {"gardening"},
display = "[[horticulture]]",
topical_categories = true,
}
labels["HTML"] = {
display = "[[Hypertext Markup Language|HTML]]",
topical_categories = true,
}
labels["human resources"] = {
display = "[[human resources]]",
topical_categories = true,
}
labels["humanities"] = {
display = "[[humanities]]",
topical_categories = true,
}
labels["hunting"] = {
display = "[[hunting#Noun|hunting]]",
topical_categories = true,
}
labels["hurling"] = {
display = "[[hurling#Noun|hurling]]",
topical_categories = true,
}
labels["hydroacoustics"] = {
Wikipedia = "Hydroacoustics",
topical_categories = true,
}
labels["hydrology"] = {
display = "[[hydrology]]",
topical_categories = true,
}
labels["ice hockey"] = {
display = "[[ice hockey]]",
topical_categories = true,
}
labels["ichthyology"] = {
display = "[[ichthyology]]",
topical_categories = true,
}
labels["idol fandom"] = {
display = "[[idol]] [[fandom]]",
topical_categories = true,
}
labels["immunochemistry"] = {
display = "[[immunochemistry]]",
topical_categories = true,
}
labels["immunology"] = {
display = "[[immunology]]",
topical_categories = true,
}
labels["import/export"] = {
display = "[[import#Noun|import]]/[[export#Noun|export]]",
topical_categories = true,
}
labels["Indo-European studies"] = {
aliases = {"indo-european studies"},
display = "[[Indo-European studies]]",
topical_categories = true,
}
labels["information science"] = {
display = "[[information science]]",
topical_categories = true,
}
labels["information theory"] = {
display = "[[information theory]]",
topical_categories = true,
}
labels["information technology"] = {
aliases = {"IT"},
display = "[[information technology]]",
topical_categories = "Computing",
}
labels["inheritance law"] = {
display = "[[inheritance law]]",
topical_categories = true,
}
labels["inorganic chemistry"] = {
display = "[[inorganic chemistry]]",
topical_categories = true,
}
labels["insurance"] = {
display = "[[insurance]]",
topical_categories = true,
}
labels["international law"] = {
display = "[[international law]]",
topical_categories = true,
}
labels["international relations"] = {
display = "[[international relations]]",
topical_categories = true,
}
labels["international standards"] = {
aliases = {"international standard", "ISO", "International Organization for Standardization", "International Organisation for Standardisation"},
Wikipedia = "International standard",
}
labels["Internet"] = {
aliases = {"internet", "online"},
display = "[[Internet]]",
topical_categories = true,
}
labels["Iranian mythology"] = {
display = "[[Iranian]] [[mythology]]",
topical_categories = true,
}
labels["Irish mythology"] = {
display = "[[Irish]] [[mythology]]",
topical_categories = true,
}
labels["Islam"] = {
aliases = {"islam", "Islamic", "Muslim"},
Wikipedia = "Islam",
topical_categories = true,
}
labels["Islamic finance"] = {
aliases = {"Islamic banking", "Muslim finance", "Muslim banking", "Sharia-compliant finance"},
Wikipedia = "Islamic finance",
topical_categories = true,
}
labels["Islamic law"] = {
aliases = {"Islamic legal", "Sharia"},
Wikipedia = "Sharia",
topical_categories = true,
}
labels["Jainism"] = {
display = "[[Jainism]]",
topical_categories = true,
}
labels["Japanese god"] = {
display = "[[Japanese]] [[mythology]]",
topical_categories = "Japanese deities",
}
labels["Japanese mythology"] = {
display = "[[Japanese]] [[mythology]]",
topical_categories = true,
}
labels["Java programming language"] = {
aliases = {"JavaPL", "Java PL"},
display = "[[w:Java (programming language)|Java programming language]]",
topical_categories = true,
}
labels["jazz"] = {
display = "[[jazz#Noun|jazz]]",
topical_categories = true,
}
labels["jewelry"] = {
aliases = {"jewellery"},
display = "[[jewelry]]",
topical_categories = true,
}
labels["Jewish law"] = {
aliases = {"Halacha", "Halachah", "Halakha", "Halakhah", "halacha", "halachah", "halakha", "halakhah", "Jewish Law", "jewish law"},
display = "[[Jewish]] [[law]]",
topical_categories = true,
}
labels["Germanic paganism"] = {
aliases = {"Asatru", "Ásatrú", "Germanic neopaganism", "Germanic Paganism", "Heathenry", "heathenry", "Norse neopaganism", "Norse paganism"},
display = "[[Germanic#Adjective|Germanic]] [[paganism]]",
topical_categories = true,
}
labels["journalism"] = {
display = "[[journalism]]",
topical_categories = "Mass media",
}
labels["Judaism"] = {
display = "[[Judaism]]",
topical_categories = true,
}
labels["judo"] = {
display = "[[judo]]",
topical_categories = true,
}
labels["juggling"] = {
display = "[[juggling#Noun|juggling]]",
topical_categories = true,
}
labels["karuta"] = {
display = "[[karuta]]",
topical_categories = true,
}
labels["kendo"] = {
display = "[[kendo]]",
topical_categories = true,
}
labels["knitting"] = {
display = "[[knitting#Noun|knitting]]",
topical_categories = true,
}
labels["labour"] = {
aliases = {"labor", "labour movement", "labor movement"},
display = "[[labour]]",
topical_categories = true,
}
labels["lacrosse"] = {
display = "[[lacrosse]]",
topical_categories = true,
}
labels["law"] = {
aliases = {"legal"},
display = "[[law#English|law]]",
topical_categories = true,
}
labels["law enforcement"] = {
aliases = {"police", "policing"},
display = "[[law enforcement]]",
topical_categories = true,
}
labels["leftism"] = {
display = "[[leftism]]",
topical_categories = true,
}
labels["letterpress"] = {
aliases = {"metal type", "metal typesetting"},
display = "[[letterpress]] [[typography]]",
topical_categories = "Typography",
}
labels["lexicography"] = {
display = "[[lexicography]]",
topical_categories = true,
}
labels["LGBTQ"] = {
aliases = {"LGBT", "LGBT+", "LGBT*", "LGBTQ+", "LGBTQ*", "LGBTQIA", "LGBTQIA+", "LGBTQIA*"},
display = "[[LGBTQ]]",
topical_categories = true,
}
labels["liberalism"] = {
display = "[[liberalism]]",
topical_categories = true,
}
labels["library science"] = {
display = "[[library science]]",
topical_categories = true,
}
labels["lichenology"] = {
display = "[[lichenology]]",
topical_categories = true,
}
labels["limnology"] = {
display = "[[limnology]]",
topical_categories = "Ecology",
}
labels["lipid"] = {
display = "[[biochemistry]]",
topical_categories = "Lipids",
}
labels["linear algebra"] = {
aliases = {"vector algebra"},
display = "[[linear algebra]]",
topical_categories = true,
}
labels["linguistic morphology"] = {
display = "[[linguistic]] [[morphology]]",
topical_categories = true,
}
labels["linguistics"] = {
aliases = {"philology"},
display = "[[linguistics]]",
topical_categories = true,
}
labels["literature"] = {
display = "[[literature]]",
topical_categories = true,
}
labels["logic"] = {
display = "[[logic]]",
topical_categories = true,
}
labels["logistics"] = {
display = "[[logistics]]",
topical_categories = true,
}
labels["luge"] = {
display = "[[luge]]",
topical_categories = true,
}
labels["machining"] = {
display = "[[machining#Noun|machining]]",
topical_categories = true,
}
labels["machine learning"] = {
aliases = {"ML"},
display = "[[machine learning]]",
topical_categories = true,
}
labels["macroeconomics"] = {
display = "[[macroeconomics]]",
topical_categories = "Economics",
}
labels["mahjong"] = {
display = "[[mahjong]]",
topical_categories = true,
}
labels["malacology"] = {
display = "[[malacology]]",
topical_categories = true,
}
labels["mammalogy"] = {
display = "[[mammalogy]]",
topical_categories = true,
}
labels["management"] = {
display = "[[management]]",
topical_categories = true,
}
labels["manga"] = {
display = "[[manga]]",
topical_categories = "Japanese fiction",
}
labels["manhua"] = {
display = "[[manhua]]",
topical_categories = "Chinese fiction",
}
labels["manhwa"] = {
display = "[[manhwa]]",
topical_categories = "Korean fiction",
}
labels["Manichaeism"] = {
display = "[[Manichaeism]]",
topical_categories = true,
}
labels["manufacturing"] = {
display = "[[manufacturing#Noun|manufacturing]]",
topical_categories = true,
}
labels["Maoism"] = {
display = "[[Maoism]]",
topical_categories = true,
}
labels["marching"] = {
display = "[[marching#Noun|marching]]",
topical_categories = true,
}
labels["marine biology"] = {
aliases = {"coral science"},
display = "[[marine biology]]",
topical_categories = true,
}
labels["marketing"] = {
display = "[[marketing#Noun|marketing]]",
topical_categories = true,
}
labels["martial arts"] = {
display = "[[martial arts]]",
topical_categories = true,
}
labels["Marxism"] = {
display = "[[Marxism]]",
topical_categories = true,
}
labels["masonry"] = {
display = "[[masonry]]",
topical_categories = true,
}
labels["massage"] = {
display = "[[massage]]",
topical_categories = true,
}
labels["materials science"] = {
display = "[[materials science]]",
topical_categories = true,
}
labels["mathematical analysis"] = {
aliases = {"analysis"},
display = "[[mathematical analysis]]",
topical_categories = true,
}
labels["mathematics"] = {
aliases = {"math", "maths"},
display = "[[mathematics]]",
topical_categories = true,
}
labels["measure theory"] = {
display = "[[measure theory]]",
topical_categories = true,
}
labels["mechanical engineering"] = {
display = "[[mechanical engineering]]",
topical_categories = true,
}
labels["mechanics"] = {
display = "[[mechanics]]",
topical_categories = true,
}
labels["media"] = {
display = "[[media]]",
topical_categories = true,
}
labels["mediaeval folklore"] = {
aliases = {"medieval folklore"},
display = "[[mediaeval]] [[folklore]]",
topical_categories = "European folklore",
}
labels["medical genetics"] = {
display = "[[medical]] [[genetics]]",
topical_categories = true,
}
labels["medical sign"] = {
display = "[[medicine]]",
topical_categories = "Medical signs and symptoms",
}
labels["medicine"] = {
aliases = {"medical"},
display = "[[medicine]]",
topical_categories = true,
}
labels["Meitei god"] = {
display = "[[Meitei]] [[mythology]]",
topical_categories = "Meitei deities",
}
labels["mental health"] = {
display = "[[mental health]]",
topical_categories = true,
}
labels["Mesopotamian mythology"] = {
display = "[[Mesopotamian]] [[mythology]]",
topical_categories = true,
}
labels["metadata"] = {
display = "[[metadata]]",
topical_categories = "Data management",
}
labels["metallurgy"] = {
display = "[[metallurgy]]",
topical_categories = true,
}
labels["metalworking"] = {
display = "[[metalworking]]",
topical_categories = true,
}
labels["metaphysics"] = {
display = "[[metaphysics]]",
topical_categories = true,
}
labels["meteorology"] = {
display = "[[meteorology]]",
topical_categories = true,
}
labels["Methodism"] = {
aliases = {"Methodist", "methodism", "methodist"},
display = "[[Methodism]]",
topical_categories = true,
}
labels["metrology"] = {
display = "[[metrology]]",
topical_categories = true,
}
labels["microbiology"] = {
display = "[[microbiology]]",
topical_categories = true,
}
labels["microelectronics"] = {
display = "[[microelectronics]]",
topical_categories = true,
}
labels["micronationalism"] = {
display = "[[micronationalism]]",
topical_categories = true,
}
labels["microscopy"] = {
display = "[[microscopy]]",
topical_categories = true,
}
labels["military"] = {
display = "[[military]]",
topical_categories = true,
}
labels["mineralogy"] = {
display = "[[mineralogy]]",
topical_categories = true,
}
labels["mining"] = {
display = "[[mining#Noun|mining]]",
topical_categories = true,
}
labels["molecular biology"] = {
display = "[[molecular biology]]",
topical_categories = true,
}
labels["monarchy"] = {
display = "[[monarchy]]",
topical_categories = true,
}
labels["money"] = {
display = "[[money]]",
topical_categories = true,
}
labels["Mormonism"] = {
display = "[[Mormonism]]",
topical_categories = true,
}
labels["motorcycling"] = {
aliases = {"motorcycle", "motorcycles", "motorbike"},
display = "[[motorcycling#Noun|motorcycling]]",
topical_categories = "Motorcycles",
}
-- There are other types of racing, but 99% of the time "racing" on its own refers to motorsports
labels["motor racing"] = {
aliases = {"motor sport", "motorsport", "motorsports", "racing"},
display = "[[motor racing]]",
topical_categories = true,
}
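-- For instance, {{lb|en|racing}}, {{lb|en|motorsport}} and {{lb|en|motor racing}} are
-- all expected to display "motor racing" and to share the same topical category
-- (a sketch of the alias behaviour, assuming the usual handling in [[Module:labels]]).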
labels["multiplicity"] = {
display = "{{l|en|multiplicity|id=multiple personalities}}",
topical_categories = "Multiplicity (psychology)",
}
labels["music"] = {
display = "[[music]]",
topical_categories = true,
}
labels["music industry"] = {
Wikipedia = "Music industry",
topical_categories = true,
}
labels["mycology"] = {
display = "[[mycology]]",
topical_categories = true,
}
labels["mythology"] = {
display = "[[mythology]]",
topical_categories = true,
}
labels["nanotechnology"] = {
display = "[[nanotechnology]]",
topical_categories = true,
}
labels["narratology"] = {
display = "[[narratology]]",
topical_categories = true,
}
labels["nautical"] = {
display = "[[nautical]]",
topical_categories = true,
}
labels["navigation"] = {
display = "[[navigation]]",
topical_categories = true,
}
labels["Nazism"] = { -- see also Neo-Nazism
aliases = {"nazism", "Nazi", "nazi", "Nazis", "nazis"},
Wikipedia = "Nazism",
topical_categories = true,
}
labels["nematology"] = {
display = "[[nematology]]",
topical_categories = "Zoology",
}
labels["neo-Nazism"] = { -- but also this is often used to indicate Nazi-used jargon; cf "white supremacist ideology"
aliases = {"Neo-Nazism", "Neo-nazism", "neo-nazism", "Neo-Nazi", "Neo-nazi", "neo-Nazi", "neo-nazi", "Neo-Nazis", "Neo-nazis", "neo-Nazis", "neo-nazis", "NeoNazism", "Neonazism", "neoNazism", "neonazism", "NeoNazi", "Neonazi", "neoNazi", "neonazi", "NeoNazis", "Neonazis", "neoNazis", "neonazis"},
Wikipedia = "Neo-Nazism",
topical_categories = true,
}
labels["netball"] = {
display = "[[netball]]",
topical_categories = true,
}
labels["networking"] = {
display = "[[networking#Noun|networking]]",
topical_categories = true,
}
labels["neuroanatomy"] = {
display = "[[neuroanatomy]]",
topical_categories = true,
}
labels["neurology"] = {
display = "[[neurology]]",
topical_categories = true,
}
labels["neuroscience"] = {
display = "[[neuroscience]]",
topical_categories = true,
}
labels["neurosurgery"] = {
display = "[[neurosurgery]]",
topical_categories = true,
}
labels["newspapers"] = {
display = "[[newspaper]]s",
topical_categories = true,
}
labels["Norse god"] = {
aliases = {"Norse goddess", "Norse deity"},
display = "[[Norse]] [[mythology]]",
topical_categories = "Norse deities",
}
labels["Norse mythology"] = {
display = "[[Norse]] [[mythology]]",
topical_categories = true,
}
labels["nuclear physics"] = {
display = "[[nuclear physics]]",
topical_categories = true,
}
labels["number theory"] = {
display = "[[number theory]]",
topical_categories = true,
}
labels["numismatics"] = {
display = "[[numismatics]]",
topical_categories = "Currency",
}
labels["nutrition"] = {
display = "[[nutrition]]",
topical_categories = true,
}
labels["object-oriented programming"] = {
aliases = {"object-oriented", "OOP"},
display = "[[object-oriented programming]]",
topical_categories = true,
}
labels["obstetrics"] = {
aliases = {"obstetric"},
display = "[[obstetrics]]",
topical_categories = true,
}
labels["occult"] = {
display = "[[occult]]",
topical_categories = true,
}
labels["oceanography"] = {
display = "[[oceanography]]",
topical_categories = true,
}
labels["oenology"] = {
display = "[[oenology]]",
topical_categories = true,
}
labels["oil industry"] = {
aliases = {"oil drilling"},
display = "[[w:Petroleum industry|oil industry]]",
topical_categories = true,
}
labels["oncology"] = {
display = "[[oncology]]",
topical_categories = true,
}
labels["online gaming"] = {
aliases = {"online games", "MMO", "MMORPG"},
display = "[[online]] [[gaming#Noun|gaming]]",
topical_categories = "Video games",
}
labels["opera"] = {
display = "[[opera]]",
topical_categories = true,
}
labels["operating systems"] = {
display = "[[operating system]]s",
topical_categories = "Software",
}
labels["ophthalmology"] = {
display = "[[ophthalmology]]",
topical_categories = true,
}
labels["optics"] = {
display = "[[optics]]",
topical_categories = true,
}
labels["organic chemistry"] = {
display = "[[organic chemistry]]",
topical_categories = true,
}
labels["ornithology"] = {
display = "[[ornithology]]",
topical_categories = true,
}
labels["orthodontics"] = {
display = "[[orthodontics]]",
topical_categories = "Dentistry",
}
labels["orthography"] = {
display = "[[orthography]]",
topical_categories = true,
}
labels["paganism"] = {
aliases = {"pagan", "neopagan", "neopaganism", "neo-pagan", "neo-paganism"},
display = "[[paganism]]",
topical_categories = true,
}
labels["pain"] = {
display = "[[medicine]]",
topical_categories = true,
}
labels["paintball"] = {
display = "[[paintball]]",
topical_categories = true,
}
labels["painting"] = {
display = "[[painting#Noun|painting]]",
topical_categories = true,
}
labels["palaeography"] = {
aliases = {"paleography"},
display = "[[palaeography]]",
topical_categories = true,
}
labels["paleontology"] = {
aliases = {"palaeontology"},
display = "[[paleontology]]",
topical_categories = true,
}
labels["palmistry"] = {
display = "[[palmistry]]",
topical_categories = true,
}
labels["palynology"] = {
display = "[[palynology]]",
topical_categories = true,
}
labels["parapsychology"] = {
display = "[[parapsychology]]",
topical_categories = true,
}
labels["parasitology"] = {
display = "[[parasitology]]",
topical_categories = true,
}
labels["particle physics"] = {
display = "[[particle physics]]",
topical_categories = true,
}
labels["pasteurisation"] = {
display = "[[pasteurisation]]",
topical_categories = true,
}
labels["patent law"] = {
aliases = {"patents"},
display = "[[patent#Noun|patent]] [[law]]",
topical_categories = true,
}
labels["pathology"] = {
display = "[[pathology]]",
topical_categories = true,
}
labels["pensions"] = {
display = "[[pension]]s",
topical_categories = true,
}
labels["pesäpallo"] = {
aliases = {"pesapallo"},
display = "[[pesäpallo]]",
topical_categories = true,
}
labels["petrochemistry"] = {
display = "[[petrochemistry]]",
topical_categories = true,
}
labels["petrology"] = {
display = "[[petrology]]",
topical_categories = true,
}
labels["pharmacology"] = {
display = "[[pharmacology]]",
topical_categories = true,
}
labels["pharmacy"] = {
display = "[[pharmacy]]",
topical_categories = true,
}
labels["pharyngology"] = {
display = "[[pharyngology]]",
topical_categories = true,
}
labels["philately"] = {
display = "[[philately]]",
topical_categories = true,
}
labels["philosophy"] = {
display = "[[philosophy]]",
topical_categories = true,
}
labels["phonetics"] = {
display = "[[phonetics]]",
topical_categories = true,
}
labels["phonology"] = {
display = "[[phonology]]",
topical_categories = true,
}
labels["photography"] = {
display = "[[photography]]",
topical_categories = true,
}
labels["phrenology"] = {
display = "[[phrenology]]",
topical_categories = true,
}
labels["physical chemistry"] = {
display = "[[physical chemistry]]",
topical_categories = true,
}
labels["physics"] = {
display = "[[physics]]",
topical_categories = true,
}
labels["physiology"] = {
display = "[[physiology]]",
topical_categories = true,
}
labels["phytopathology"] = {
display = "[[phytopathology]]",
topical_categories = true,
}
labels["pinball"] = {
display = "[[pinball]]",
topical_categories = true,
}
labels["planetology"] = {
display = "[[planetology]]",
topical_categories = true,
}
labels["playground games"] = {
aliases = {"playground game"},
display = "[[playground]] [[game]]s",
topical_categories = true,
}
labels["poetry"] = {
display = "[[poetry]]",
topical_categories = true,
}
labels["Pokémon"] = {
display = "''[[w:Pokémon|Pokémon]]''",
topical_categories = true,
}
labels["poker"] = {
display = "[[poker]]",
topical_categories = true,
}
labels["poker slang"] = {
display = "[[poker]] [[slang]]",
topical_categories = "Poker",
}
labels["political science"] = {
display = "[[political science]]",
topical_categories = true,
}
labels["politics"] = {
aliases = {"political"},
display = "[[politics]]",
topical_categories = true,
}
labels["Australian politics"] = {
display = "[[w:Politics of Australia|Australian politics]]",
topical_categories = true,
}
labels["Canadian politics"] = {
display = "[[w:Politics of Canada|Canadian politics]]",
topical_categories = true,
}
labels["European politics"] = {
display = "[[w:Politics of Europe|European politics]]",
topical_categories = true,
}
labels["EU politics"] = {
display = "[[w:Politics of the European Union|EU politics]]",
topical_categories = true,
}
labels["French politics"] = {
display = "[[w:Politics of France|French politics]]",
topical_categories = true,
}
labels["German politics"] = {
display = "[[w:Politics of Germany|German politics]]",
topical_categories = true,
}
labels["Hong Kong politics"] = {
aliases = {"HK politics"},
display = "[[w:Politics of Hong Kong|HK politics]]",
topical_categories = true,
}
labels["Indian politics"] = {
display = "[[w:Politics of India|Indian politics]]",
topical_categories = true,
}
labels["Indonesian politics"] = {
aliases = {"Indonesia politics"},
display = "[[w:Politics of Indonesia|Indonesian politics]]",
topical_categories = true,
}
labels["Irish politics"] = {
display = "[[w:Politics of the Republic of Ireland|Irish politics]]",
topical_categories = true,
}
labels["Malaysian politics"] = {
aliases = {"Malaysia politics"},
display = "[[w:Politics of Malaysia|Malaysian politics]]",
topical_categories = true,
}
labels["New Zealand politics"] = {
display = "[[w:Politics of New Zealand|New Zealand politics]]",
topical_categories = true,
}
labels["Pakistani politics"] = {
display = "[[w:Politics of Pakistan|Pakistani politics]]",
topical_categories = true,
}
labels["Palestinian politics"] = {
aliases = {"Palestine politics"},
display = "[[w:Politics of the Palestinian National Authority|Palestinian politics]]",
topical_categories = true,
}
labels["Philippine politics"] = {
aliases = {"Filipino politics"},
display = "[[w:Politics of the Philippines|Philippine politics]]",
topical_categories = true,
}
labels["Philmont Scout Ranch"] = {
aliases = {"Philmont"},
display = "[[w:Philmont Scout Ranch|Philmont Scout Ranch]]",
topical_categories = true,
}
labels["Spanish politics"] = {
display = "[[w:Politics of Spain|Spanish politics]]",
topical_categories = true,
}
labels["Swiss politics"] = {
display = "[[w:Politics of Switzerland|Swiss politics]]",
topical_categories = true,
}
labels["UK politics"] = {
display = "[[w:Politics of the United Kingdom|UK politics]]",
topical_categories = true,
}
labels["UN"] = {
display = "[[United Nations|UN]]",
topical_categories = "United Nations",
}
labels["US politics"] = {
display = "[[w:Politics of the United States|US politics]]",
topical_categories = true,
}
labels["pornography"] = {
aliases = {"porn", "porno"},
display = "[[pornography]]",
topical_categories = true,
}
labels["Portuguese folklore"] = {
display = "[[Portuguese#Adjective|Portuguese]] [[folklore]]",
topical_categories = "European folklore",
}
labels["post"] = {
display = "[[post#Etymology 2|post]]",
topical_categories = true,
}
labels["potential theory"] = {
display = "[[potential theory]]",
topical_categories = true,
}
labels["pottery"] = {
display = "[[pottery]]",
topical_categories = "Ceramics",
}
labels["pragmatics"] = {
display = "[[pragmatics]]",
topical_categories = true,
}
labels["printing"] = {
display = "[[printing#Noun|printing]]",
topical_categories = true,
}
labels["probability theory"] = {
display = "[[probability theory]]",
topical_categories = true,
}
labels["professional wrestling"] = {
aliases = {"pro wrestling"},
display = "[[professional wrestling]]",
topical_categories = true,
}
labels["programming"] = {
aliases = {"computer programming"},
display = "[[programming#Noun|programming]]",
topical_categories = true,
}
labels["property law"] = {
aliases = {"land law", "real estate law"},
display = "[[property law]]",
topical_categories = true,
}
labels["prosody"] = {
display = "[[prosody]]",
topical_categories = true,
}
labels["Protestantism"] = {
aliases = {"protestantism", "Protestant", "protestant"},
display = "[[Protestantism]]",
topical_categories = true,
}
labels["pseudoscience"] = {
display = "[[pseudoscience]]",
topical_categories = true,
}
labels["psychiatry"] = {
display = "[[psychiatry]]",
topical_categories = true,
}
labels["psychoanalysis"] = {
display = "[[psychoanalysis]]",
topical_categories = true,
}
labels["psychology"] = {
display = "[[psychology]]",
topical_categories = true,
}
labels["psychotherapy"] = {
display = "[[psychotherapy]]",
topical_categories = true,
}
labels["publishing"] = {
display = "[[publishing#Noun|publishing]]",
topical_categories = true,
}
labels["pulmonology"] = {
display = "[[pulmonology]]",
topical_categories = true,
}
labels["pyrotechnics"] = {
display = "[[pyrotechnics]]",
topical_categories = true,
}
labels["QAnon"] = {
aliases = {"Qanon"},
Wikipedia = "QAnon",
topical_categories = true,
}
labels["Quakerism"] = {
display = "[[Quakerism]]",
topical_categories = true,
}
labels["quantum computing"] = {
display = "[[quantum computing]]",
topical_categories = true,
}
labels["quantum mechanics"] = {
aliases = {"quantum physics"},
display = "[[quantum mechanics]]",
topical_categories = true,
}
-- TODO: What kind of topic is "radiation"? Is it specific kinds of radiation? That would be a set-type category.
labels["radiation"] = {
display = "[[physics]]",
topical_categories = true,
}
labels["radio"] = {
display = "[[radio]]",
topical_categories = true,
}
labels["Raëlism"] = {
display = "[[Raëlism]]",
topical_categories = true,
}
labels["rail transport"] = {
aliases = {"rail", "railroading", "railroads"},
display = "[[rail transport]]",
topical_categories = "Rail transportation",
}
labels["Rastafari"] = {
aliases = {"Rasta", "rasta", "Rastafarian", "rastafarian", "Rastafarianism"},
display = "[[Rastafari]]",
topical_categories = true,
}
labels["real estate"] = {
display = "[[real estate]]",
topical_categories = true,
}
labels["real tennis"] = {
display = "[[real tennis]]",
topical_categories = "Tennis",
}
labels["recreational mathematics"] = {
display = "[[recreational mathematics]]",
topical_categories = "Mathematics",
}
labels["Reddit"] = {
display = "[[Reddit]]",
topical_categories = true,
}
labels["regular expressions"] = {
aliases = {"regex"},
display = "[[regular expression]]s",
topical_categories = true,
}
labels["relativity"] = {
display = "[[relativity]]",
topical_categories = true,
}
labels["religion"] = {
display = "[[religion]]",
topical_categories = true,
}
labels["rhetoric"] = {
display = "[[rhetoric]]",
topical_categories = true,
}
labels["road transport"] = {
aliases = {"roads"},
display = "[[w:road transport|road transport]]",
topical_categories = true,
}
labels["robotics"] = {
display = "[[robotics]]",
topical_categories = true,
}
labels["rock paper scissors"] = {
topical_categories = true,
}
labels["roleplaying games"] = {
aliases = {"role playing games", "role-playing games", "RPG", "RPGs"},
display = "[[roleplaying game]]s",
topical_categories = "Role-playing games",
}
labels["roller derby"] = {
display = "[[roller derby]]",
topical_categories = true,
}
labels["Roman Catholicism"] = {
aliases = {"Roman Catholic", "Roman Catholic Church"},
display = "[[Roman Catholicism]]",
topical_categories = true,
}
labels["Roman Empire"] = {
display = "[[Roman Empire]]",
topical_categories = true,
}
labels["Roman mythology"] = {
display = "[[Roman]] [[mythology]]",
topical_categories = true,
}
labels["Roman numerals"] = {
display = "[[Roman numeral]]s",
topical_categories = true,
}
labels["roofing"] = {
display = "[[roofing#Noun|roofing]]",
topical_categories = true,
}
labels["rosiculture"] = {
display = "[[rosiculture]]",
topical_categories = true,
}
labels["rowing"] = {
display = "[[rowing#Noun|rowing]]",
topical_categories = true,
}
labels["Rubik's Cube"] = {
aliases = {"Rubik's cubes"},
display = "[[Rubik's Cube]]",
topical_categories = true,
}
labels["rugby"] = {
display = "[[rugby]]",
topical_categories = true,
}
labels["rugby league"] = {
display = "[[rugby league]]",
topical_categories = true,
}
labels["rugby union"] = {
display = "[[rugby union]]",
topical_categories = true,
}
labels["sailing"] = {
display = "[[sailing#Noun|sailing]]",
topical_categories = true,
}
labels["science fiction"] = {
aliases = {"scifi", "sci fi", "sci-fi"},
display = "[[science fiction]]",
topical_categories = true,
}
labels["sciences"] = {
aliases = {"science", "scientific"},
display = "[[sciences]]",
topical_categories = true,
}
labels["Scientology"] = {
display = "[[Scientology]]",
topical_categories = true,
}
-- Note: this is the usual term, not "Scottish law".
labels["Scots law"] = {
aliases = {"Scottish law", "Scotland law", "Scots Law", "Scottish Law", "Scotland Law"},
Wikipedia = true,
topical_categories = true,
}
labels["Scouting"] = {
aliases = {"scouting"},
display = "[[scouting]]",
topical_categories = true,
}
labels["Scrabble"] = {
display = "''[[Scrabble]]''",
topical_categories = true,
}
labels["scrapbooks"] = {
display = "[[scrapbook]]s",
topical_categories = true,
}
labels["sculpture"] = {
display = "[[sculpture]]",
topical_categories = true,
}
labels["seduction community"] = {
display = "[[w:Seduction community|seduction community]]",
topical_categories = true,
}
labels["seismology"] = {
display = "[[seismology]]",
topical_categories = true,
}
labels["semantics"] = {
display = "[[semantics]]",
topical_categories = true,
}
labels["semiotics"] = {
display = "[[semiotics]]",
topical_categories = true,
}
labels["semiconductors"] = {
display = "[[semiconductor]]s",
topical_categories = true,
}
labels["set theory"] = {
display = "[[set theory]]",
topical_categories = true,
}
labels["sewing"] = {
display = "[[sewing#Noun|sewing]]",
topical_categories = true,
}
labels["sex"] = {
display = "[[sex]]",
topical_categories = true,
}
labels["sexology"] = {
display = "[[sexology]]",
topical_categories = true,
}
labels["sex position"] = {
display = "[[sex]]",
topical_categories = "Sex positions",
}
labels["sexuality"] = {
display = "[[sexuality]]",
topical_categories = true,
}
labels["Shaivism"] = {
display = "[[Shaivism]]",
topical_categories = true,
}
labels["shamanism"] = {
aliases = {"Shamanism"},
display = "[[shamanism]]",
topical_categories = true,
}
labels["Shi'ism"] = {
aliases = {"Shia", "Shi'ite", "Shi'i"},
display = "[[Shia Islam]]",
topical_categories = true,
}
labels["Shinto"] = {
display = "[[Shinto]]",
topical_categories = true,
}
labels["ship parts"] = {
display = "[[nautical]]",
topical_categories = "Ship parts",
}
labels["shipping"] = {
display = "[[shipping#Noun|shipping]]",
topical_categories = true,
}
labels["shoemaking"] = {
display = "[[shoemaking]]",
topical_categories = true,
}
labels["shogi"] = {
display = "[[shogi]]",
topical_categories = true,
}
labels["signal processing"] = {
display = "[[w:Signal processing|signal processing]]",
topical_categories = true,
}
labels["Sikhism"] = {
display = "[[Sikhism]]",
topical_categories = true,
}
labels["singing"] = {
display = "[[singing#Noun|singing]]",
topical_categories = true,
}
labels["skateboarding"] = {
display = "[[skateboarding#Noun|skateboarding]]",
topical_categories = true,
}
labels["skating"] = {
display = "[[skating#Noun|skating]]",
topical_categories = true,
}
labels["skiing"] = {
display = "[[skiing#Noun|skiing]]",
topical_categories = true,
}
labels["Slavic god"] = {
display = "[[Slavic]] [[mythology]]",
topical_categories = "Slavic deities",
}
labels["Slavic mythology"] = {
display = "[[Slavic]] [[mythology]]",
topical_categories = true,
}
labels["smoking"] = {
display = "[[smoking#Noun|smoking]]",
topical_categories = true,
}
labels["snooker"] = {
display = "[[snooker#Noun|snooker]]",
topical_categories = true,
}
labels["snowboarding"] = {
display = "[[snowboarding#Noun|snowboarding]]",
topical_categories = true,
}
labels["soccer"] = {
aliases = {"football", "association football"},
display = "[[soccer]]",
topical_categories = "Football (soccer)",
}
labels["social sciences"] = {
aliases = {"social science"},
display = "[[social science]]s",
topical_categories = true,
}
labels["socialism"] = {
display = "[[socialism]]",
topical_categories = true,
}
labels["social media"] = {
display = "[[social media]]",
topical_categories = true,
}
labels["sociolinguistics"] = {
display = "[[sociolinguistics]]",
topical_categories = true,
}
labels["sociology"] = {
display = "[[sociology]]",
topical_categories = true,
}
labels["softball"] = {
display = "[[softball]]",
topical_categories = true,
}
labels["software"] = {
display = "[[software]]",
topical_categories = true,
}
labels["software architecture"] = {
display = "[[software architecture]]",
topical_categories = {"Software engineering", "Programming"},
}
labels["software engineering"] = {
aliases = {"software development"},
display = "[[software engineering]]",
topical_categories = true,
}
labels["soil science"] = {
display = "[[soil science]]",
topical_categories = true,
}
labels["sound"] = {
display = "[[sound#Noun|sound]]",
topical_categories = true,
}
labels["sound engineering"] = {
display = "[[sound engineering]]",
topical_categories = true,
}
labels["South Korean idol fandom"] = {
display = "[[South Korean]] [[idol]] [[fandom]]",
topical_categories = true,
}
labels["South Park"] = {
display = "''[[w:South Park|South Park]]''",
topical_categories = true,
}
labels["Soviet Union"] = {
aliases = {"USSR"},
display = "[[Soviet Union]]",
topical_categories = true,
}
labels["space flight"] = {
aliases = {"spaceflight", "space travel"},
display = "[[space flight]]",
topical_categories = "Space",
}
labels["space science"] = {
aliases = {"space"},
display = "[[space science]]",
topical_categories = "Space",
}
labels["spectroscopy"] = {
display = "[[spectroscopy]]",
topical_categories = true,
}
labels["speedrunning"] = {
aliases = {"speedrun", "speedruns"},
display = "[[speedrunning]]",
topical_categories = true,
}
labels["spinning"] = {
display = "[[spinning]]",
topical_categories = true,
}
labels["spiritualism"] = {
display = "[[spiritualism]]",
topical_categories = true,
}
labels["sports"] = {
aliases = {"sport"},
display = "[[sports]]",
topical_categories = true,
}
labels["squash"] = {
display = "[[w:squash (sport)|squash]]",
topical_categories = true,
}
labels["statistical mechanics"] = {
display = "[[statistical mechanics]]",
topical_categories = true,
}
labels["statistics"] = {
display = "[[statistics]]",
topical_categories = true,
}
labels["Star Wars"] = {
display = "''[[Star Wars]]''",
topical_categories = true,
}
labels["stock market"] = {
display = "[[stock market]]",
topical_categories = true,
}
labels["stock ticker symbol"] = {
aliases = {"stock symbol"},
display = "[[stock ticker symbol]]",
topical_categories = "Stock symbols for companies",
}
labels["subculture"] = {
display = "[[subculture]]",
topical_categories = "Culture",
}
labels["Sufism"] = {
aliases = {"Sufi Islam"},
display = "[[w:Sufism|Sufism]]",
topical_categories = true,
}
labels["sumo"] = {
display = "[[sumo]]",
topical_categories = true,
}
labels["supply chain"] = {
display = "[[supply chain]]",
topical_categories = true,
}
labels["surfing"] = {
display = "[[surfing#Noun|surfing]]",
topical_categories = true,
}
labels["surgery"] = {
display = "[[surgery]]",
topical_categories = true,
}
labels["surveying"] = {
display = "[[surveying#Noun|surveying]]",
topical_categories = true,
}
labels["sushi"] = {
display = "[[sushi]]",
topical_categories = true,
}
labels["swimming"] = {
display = "[[swimming#Noun|swimming]]",
topical_categories = true,
}
labels["swords"] = {
display = "[[sword]]s",
topical_categories = true,
}
labels["systematics"] = {
display = "[[systematics]]",
topical_categories = "Taxonomy",
}
labels["systems engineering"] = {
display = "[[systems engineering]]",
topical_categories = true,
}
labels["systems theory"] = {
display = "[[systems theory]]",
topical_categories = true,
}
labels["table tennis"] = {
display = "[[table tennis]]",
topical_categories = true,
}
labels["Taoism"] = {
aliases = {"Daoism"},
display = "[[Taoism]]",
topical_categories = true,
}
labels["tarot"] = {
display = "[[tarot]]",
topical_categories = "Cartomancy",
}
labels["taxation"] = {
aliases = {"tax", "taxes"},
display = "[[taxation]]",
topical_categories = true,
}
labels["taxonomy"] = {
display = "[[taxonomy]]",
topical_categories = true,
}
labels["technology"] = {
display = "[[technology]]",
topical_categories = true,
}
labels["telecommunications"] = {
aliases = {"telecommunication", "telecom"},
display = "[[telecommunications]]",
topical_categories = true,
}
labels["telegraphy"] = {
display = "[[telegraphy]]",
topical_categories = true,
}
labels["telephony"] = {
aliases = {"telephone", "telephones"},
display = "[[telephony]]",
topical_categories = true,
}
labels["television"] = {
aliases = {"TV"},
display = "[[television]]",
topical_categories = true,
}
labels["Tumblr aesthetic"] = {
display = "[[Tumblr]] aesthetic",
topical_categories = "Aesthetics",
}
labels["tennis"] = {
display = "[[tennis]]",
topical_categories = true,
}
labels["teratology"] = {
display = "[[teratology]]",
topical_categories = true,
}
labels["Tetris"] = {
display = "[[Tetris]]",
topical_categories = true,
}
labels["textiles"] = {
display = "[[textiles]]",
topical_categories = true,
}
labels["theater"] = {
aliases = {"theatre"},
display = "[[theater]]",
topical_categories = true,
}
labels["theology"] = {
display = "[[theology]]",
topical_categories = true,
}
labels["thermodynamics"] = {
display = "[[thermodynamics]]",
topical_categories = true,
}
labels["Tibetan Buddhism"] = {
display = "[[Tibetan Buddhism]]",
topical_categories = "Buddhism",
}
labels["tiddlywinks"] = {
display = "[[tiddlywinks]]",
topical_categories = true,
}
labels["TikTok aesthetic"] = {
display = "[[TikTok]] aesthetic",
topical_categories = "Aesthetics",
}
labels["time"] = {
display = "[[time]]",
topical_categories = true,
}
labels["topology"] = {
display = "[[topology]]",
topical_categories = true,
}
labels["tort law"] = {
display = "[[tort law]]",
topical_categories = "Law",
}
labels["tourism"] = {
display = "[[tourism]]",
topical_categories = true,
}
labels["toxicology"] = {
display = "[[toxicology]]",
topical_categories = true,
}
labels["trading"] = {
display = "[[trading#Noun|trading]]",
topical_categories = true,
}
labels["trading cards"] = {
display = "[[trading card]]s",
topical_categories = true,
}
labels["traditional Chinese medicine"] = {
aliases = {"TCM", "Chinese medicine"},
display = "[[traditional Chinese medicine]]",
topical_categories = true,
}
labels["traditional Korean medicine"] = {
aliases = {"Korean medicine"},
display = "{{w|traditional Korean medicine}}",
topical_categories = true,
}
labels["transgender"] = {
display = "[[transgender]]",
topical_categories = true,
}
labels["translation studies"] = {
display = "[[translation studies]]",
topical_categories = true,
}
labels["transport"] = {
aliases = {"transportation"},
display = "[[transport]]",
topical_categories = true,
}
labels["traumatology"] = {
display = "[[traumatology]]",
topical_categories = "Emergency medicine",
}
labels["travel"] = {
display = "[[travel]]",
topical_categories = true,
}
labels["trigonometry"] = {
display = "[[trigonometry]]",
topical_categories = true,
}
labels["trigonometric function"] = {
display = "[[trigonometry]]",
topical_categories = "Trigonometric functions",
}
labels["trust law"] = {
display = "[[trust law]]",
topical_categories = "Law",
}
labels["two-up"] = {
display = "[[two-up]]",
topical_categories = true,
}
labels["Twitter"] = {
aliases = {"twitter"},
display = "[[Twitter#Proper noun|Twitter]]",
topical_categories = true,
}
labels["typography"] = {
aliases = {"typesetting"},
display = "[[typography]]",
topical_categories = true,
}
labels["ufology"] = {
display = "[[ufology]]",
topical_categories = true,
}
labels["underwater diving"] = {
aliases = {"scuba", "scuba diving"},
display = "[[underwater]] [[diving#Noun|diving]]",
topical_categories = true,
}
labels["Unicode"] = {
aliases = {"Unicode standard"},
Wikipedia = true,
topical_categories = true,
}
labels["urban studies"] = {
aliases = {"urbanism", "urban planning"},
display = "[[urban studies]]",
topical_categories = true,
}
labels["urology"] = {
display = "[[urology]]",
topical_categories = true,
}
labels["Vaishnavism"] = {
display = "[[Vaishnavism]]",
topical_categories = true,
}
labels["Valentinianism"] = {
aliases = {"valentinianism"},
display = "[[w:Valentinianism|Valentinianism]]",
topical_categories = true,
}
labels["Vedic religion"] = {
aliases = {"Vedic Hinduism", "Ancient Hinduism", "ancient Hinduism", "Vedism", "Vedicism"},
display = "[[w:Historical Vedic religion|Vedic religion]]",
topical_categories = true,
}
labels["vegetable"] = {
aliases = {"vegetables"},
display = "[[vegetable]]",
topical_categories = "Vegetables",
}
labels["vehicles"] = {
aliases = {"vehicle"},
display = "[[vehicle]]s",
topical_categories = true,
}
labels["veterinary medicine"] = {
display = "[[veterinary medicine]]",
topical_categories = true,
}
labels["video compression"] = {
display = "[[w:Video compression|video compression]]",
topical_categories = true,
}
labels["video games"] = {
aliases = {"video game", "video gaming"},
display = "[[video game]]s",
topical_categories = true,
}
labels["virology"] = {
display = "[[virology]]",
topical_categories = true,
}
labels["virus"] = {
display = "[[virology]]",
topical_categories = "Viruses",
}
labels["viticulture"] = {
display = "[[viticulture]]",
topical_categories = {"Horticulture", "Wine"},
}
labels["volcanology"] = {
aliases = {"vulcanology"},
display = "[[volcanology]]",
topical_categories = true,
}
labels["volleyball"] = {
display = "[[volleyball]]",
topical_categories = true,
}
labels["voodoo"] = {
display = "[[voodoo]]",
topical_categories = true,
}
labels["water sports"] = {
aliases = {"watersport", "watersports", "water sport"},
display = "[[watersport|water sports]]",
topical_categories = true,
}
labels["weather"] = {
topical_categories = true,
}
labels["weaving"] = {
display = "[[weaving#Noun|weaving]]",
topical_categories = true,
}
labels["web design"] = {
display = "[[web design]]",
topical_categories = true,
aliases = {"Web design"}
}
labels["web development"] = {
display = "[[web development]]",
topical_categories = {"Programming", "Web design"},
}
labels["weightlifting"] = {
display = "[[weightlifting]]",
topical_categories = true,
}
labels["white supremacy"] = { -- but also this is often used to indicate white-supremacist-used jargon; cf "Nazism"
aliases = {"white nationalism", "white nationalist", "white power", "white racism", "white supremacist ideology", "white supremacism", "white supremacist"},
Wikipedia = "White supremacy",
topical_categories = "White supremacist ideology",
}
labels["wine"] = {
display = "[[wine]]",
topical_categories = true,
}
labels["winemaking"] = {
display = "[[winemaking]]",
topical_categories = "Wine",
}
labels["woodworking"] = {
display = "[[woodworking]]",
topical_categories = true,
}
labels["World War I"] = {
aliases = {"World War 1", "WWI", "WW I", "WW1", "WW 1"},
Wikipedia = "World War I",
topical_categories = true,
}
labels["World War II"] = {
aliases = {"World War 2", "WWII", "WW II", "WW2", "WW 2"},
Wikipedia = "World War II",
topical_categories = true,
}
labels["winter sports"] = {
display = "[[winter sport]]s",
topical_categories = true,
}
labels["wrestling"] = {
display = "[[wrestling#Noun|wrestling]]",
topical_categories = true,
}
labels["writing"] = {
display = "[[writing#Noun|writing]]",
topical_categories = true,
}
labels["Yazidism"] = {
aliases = {"Yezidism"},
display = "[[Yazidism]]",
topical_categories = true,
}
labels["yoga"] = {
display = "[[yoga]]",
topical_categories = true,
}
labels["zoology"] = {
display = "[[zoology]]",
topical_categories = true,
}
labels["zootomy"] = {
display = "[[zootomy]]",
topical_categories = "Animal body parts",
}
labels["Zoroastrianism"] = {
display = "[[Zoroastrianism]]",
topical_categories = true,
}
-- Labels with set-type categories
-- TODO: These are probably misuses of the label template, and should be deprecated
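-- In each entry below the label names a member of a set rather than a field of study: `display` still points
-- at the parent field (e.g. "[[biochemistry]]"), while `topical_categories` names the set category itself
-- (e.g. "Amino acids"). A minimal sketch of the pattern, using a hypothetical label:
--   labels["example member"] = {
--       display = "[[some field]]",              -- what the label template shows
--       topical_categories = "Example members",  -- the set category the entry is placed in
--   }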
labels["amino acid"] = {
display = "[[biochemistry]]",
topical_categories = "Amino acids",
}
labels["architectural element"] = {
aliases = {"architectural elements"},
display = "[[architecture]]",
topical_categories = "Architectural elements",
}
labels["artistic work"] = {
display = "[[art#Noun|art]]",
topical_categories = "Artistic works",
}
labels["asterism"] = {
display = "[[uranography]]",
topical_categories = "Asterisms",
}
labels["biblical character"] = {
aliases = {"Biblical character", "biblical figure", "Biblical figure"},
display = "[[Bible|biblical]]",
topical_categories = "Biblical characters",
}
labels["bibliography"] = {
display = "[[bibliography]]",
topical_categories = true,
}
labels["bicycle parts"] = {
display = "[[w:List of bicycle parts|cycling]]",
topical_categories = true,
}
labels["book of the bible"] = {
display = "[[Bible|biblical]]",
topical_categories = "Books of the Bible",
}
labels["brass instruments"] = {
aliases = {"brass instrument"},
display = "[[music]]",
topical_categories = true,
}
labels["canid"] = {
display = "[[zoology]]",
topical_categories = "Canids",
}
labels["carbohydrate"] = {
display = "[[biochemistry]]",
topical_categories = "Carbohydrates",
}
labels["carboxylic acid"] = {
display = "[[organic chemistry]]",
topical_categories = "Carboxylic acids",
}
labels["coenzyme"] = {
display = "[[biochemistry]]",
topical_categories = "Coenzymes",
}
labels["conspiracy theories"] = {
aliases = {"conspiracy theory", "conspiracy"},
display = "[[conspiracy theory#Noun|conspiracy theories]]",
topical_categories = true,
}
labels["constellation"] = {
display = "[[astronomy]]",
topical_categories = "Constellations",
}
labels["cookware"] = {
display = "[[cooking#Noun|cooking]]",
topical_categories = "Cookware and bakeware",
}
labels["currencies"] = { -- Don't merge with "numismatics", as the category is different.
aliases = {"currency"},
display = "[[numismatics]]",
topical_categories = "Currencies",
}
labels["dances"] = {
display = "[[dance#Noun|dance]]",
topical_categories = true,
}
labels["demonym"] = {
display = "[[demonym]]",
topical_categories = "Demonyms",
}
labels["disease"] = {
aliases = {"diseases"},
display = "[[pathology]]",
topical_categories = "Diseases",
}
labels["E number"] = {
display = "[[food]] [[manufacture]]",
plain_categories = "European food additive numbers",
}
labels["Egyptian god"] = {
display = "[[Egyptian]] [[mythology]]",
topical_categories = "Egyptian deities",
}
labels["element symbol"] = {
display = "[[chemistry]]",
plain_categories = "Symbols for chemical elements",
}
labels["enzyme"] = {
display = "[[biochemistry]]",
topical_categories = "Enzymes",
}
labels["fatty acid"] = {
display = "[[organic chemistry]]",
topical_categories = "Fatty acids",
}
labels["felid"] = {
aliases = {"cat"},
display = "[[zoology]]",
topical_categories = "Felids",
}
labels["fictional character"] = {
display = "[[fiction]]",
topical_categories = "Fictional characters",
}
labels["figure of speech"] = {
display = "[[rhetoric]]",
topical_categories = "Figures of speech",
}
labels["fish"] = {
display = "[[zoology]]",
topical_categories = true,
}
labels["footwear"] = {
display = "[[footwear]]",
topical_categories = true,
}
labels["functional group prefix"] = {
display = "[[organic chemistry]]",
topical_categories = "Functional group prefixes",
}
labels["functional group suffix"] = {
display = "[[organic chemistry]]",
topical_categories = "Functional group suffixes",
}
labels["functional programming"] = {
display = "[[functional programming]]",
topical_categories = "Programming",
}
labels["galaxy"] = {
display = "[[astronomy]]",
topical_categories = "Galaxies",
}
labels["genetic disorder"] = {
display = "[[medical]] [[genetics]]",
topical_categories = "Genetic disorders",
}
labels["Greek god"] = {
aliases = {"Greek goddess"},
display = "[[Greek]] [[mythology]]",
topical_categories = "Greek deities",
}
labels["hand games"] = {
aliases = {"hand game"},
display = "[[hand]] [[game]]s",
topical_categories = true,
}
labels["heraldic charge"] = {
aliases = {"heraldiccharge"},
display = "[[heraldry]]",
topical_categories = "Heraldic charges",
}
labels["Hindu god"] = {
display = "[[Hinduism]]",
topical_categories = "Hindu deities",
}
labels["historical currencies"] = {
aliases = {"historical currency"},
display = "[[numismatics]]",
sense_categories = "historical",
topical_categories = "Historical currencies",
}
labels["historical period"] = {
aliases = {"historical periods"},
display = "[[history]]",
topical_categories = "Historical periods",
}
labels["hormone"] = {
display = "[[biochemistry]]",
topical_categories = "Hormones",
}
labels["hydrocarbon chain prefix"] = {
display = "[[organic chemistry]]",
topical_categories = "Hydrocarbon chain prefixes",
}
labels["hydrocarbon chain suffix"] = {
display = "[[organic chemistry]]",
topical_categories = "Hydrocarbon chain suffixes",
}
labels["incoterm"] = {
display = "[[Incoterm]]",
topical_categories = "Incoterms",
}
labels["inorganic compound"] = {
display = "[[inorganic chemistry]]",
topical_categories = "Inorganic compounds",
}
labels["isotope"] = {
display = "[[physics]]",
topical_categories = "Isotopes",
}
labels["labour law"] = {
display = "[[labour law]]",
topical_categories = "Law",
}
labels["landforms"] = {
display = "[[geography]]",
topical_categories = true,
}
labels["logical fallacy"] = {
display = "[[rhetoric]]",
topical_categories = "Logical fallacies",
}
labels["lutherie"] = {
display = "[[lutherie]]",
topical_categories = true,
}
labels["Mesopotamian god"] = {
display = "[[Mesopotamian]] [[mythology]]",
topical_categories = "Mesopotamian deities",
}
labels["metamaterial"] = {
display = "[[physics]]",
topical_categories = "Metamaterials",
}
labels["military ranks"] = {
aliases = {"military rank"},
display = "[[military]]",
topical_categories = true,
}
labels["military unit"] = {
display = "[[military]]",
topical_categories = "Military units",
}
labels["mineral"] = {
display = "[[mineralogy]]",
topical_categories = "Minerals",
}
labels["mobile phones"] = {
aliases = {"cell phone", "cell phones", "mobile phone", "mobile telephony"},
display = "[[mobile telephone|mobile telephony]]",
topical_categories = true,
}
labels["muscle"] = {
display = "[[anatomy]]",
topical_categories = "Muscles",
}
labels["mushroom"] = {
aliases = {"mushrooms"},
display = "[[mycology]]",
topical_categories = "Mushrooms",
}
labels["musical instruments"] = {
aliases = {"musical instrument"},
display = "[[music]]",
topical_categories = true,
}
labels["music genre"] = {
display = "[[music]]",
topical_categories = "Musical genres",
}
labels["musician"] = {
display = "[[music]]",
topical_categories = "Musicians",
}
labels["mythological creature"] = {
aliases = {"mythological creatures"},
display = "[[mythology]]",
topical_categories = "Mythological creatures",
}
labels["neurotoxin"] = {
display = "[[neurotoxicology]]",
topical_categories = "Neurotoxins",
}
labels["neurotransmitter"] = {
display = "[[biochemistry]]",
topical_categories = "Neurotransmitters",
}
labels["organic compound"] = {
display = "[[organic chemistry]]",
topical_categories = "Organic compounds",
}
labels["part of speech"] = {
display = "[[grammar]]",
topical_categories = "Parts of speech",
}
labels["particle"] = {
display = "[[physics]]",
topical_categories = "Subatomic particles",
}
labels["percussion instruments"] = {
aliases = {"percussion instrument"},
display = "[[music]]",
topical_categories = true,
}
labels["pharmaceutical drug"] = {
display = "[[pharmacology]]",
topical_categories = "Pharmaceutical drugs",
}
labels["pharmaceutical effect"] = {
display = "[[pharmacology]]",
topical_categories = "Pharmaceutical effects",
}
labels["plant"] = {
display = "[[botany]]",
topical_categories = "Plants",
}
labels["plant disease"] = {
display = "[[phytopathology]]",
topical_categories = "Plant diseases",
}
labels["poison"] = {
display = "[[toxicology]]",
topical_categories = "Poisons",
}
labels["political subdivision"] = {
display = "[[government]]",
topical_categories = "Political subdivisions",
}
labels["protein"] = {
aliases = {"proteins"},
display = "[[biochemistry]]",
topical_categories = "Proteins",
}
labels["rock"] = {
display = "[[petrology]]",
topical_categories = "Rocks",
}
labels["Roman god"] = {
aliases = {"Roman goddess"},
display = "[[Roman]] [[mythology]]",
topical_categories = "Roman deities",
}
labels["schools"] = {
display = "[[education]]",
topical_categories = true,
}
labels["self-harm"] = {
aliases = {"selfharm", "self harm", "self-harm community"},
display = "[[self-harm]]",
topical_categories = true,
}
labels["SEO"] = {
display = "[[search engine optimization|SEO]]",
topical_categories = {"Internet", "Marketing"},
}
labels["skeleton"] = {
display = "[[anatomy]]",
topical_categories = true,
}
labels["standard of identity"] = {
display = "[[standard of identity|standards of identity]]",
topical_categories = "Standards of identity",
}
labels["star"] = {
display = "[[astronomy]]",
topical_categories = "Stars",
}
labels["steroid"] = {
display = "[[biochemistry]]",
topical_categories = "Steroids",
}
labels["steroid hormone"] = {
aliases = {"steroid drug"},
display = "[[biochemistry]], [[steroids]]",
topical_categories = "Hormones",
}
labels["string instruments"] = {
aliases = {"string instrument"},
display = "[[music]]",
topical_categories = true,
}
labels["surface feature"] = {
display = "[[planetology]]",
topical_categories = "Planetary nomenclature",
}
labels["sugar acid"] = {
display = "[[organic chemistry]]",
topical_categories = "Sugar acids",
}
labels["symptom"] = {
display = "[[medicine]]",
topical_categories = "Medical signs and symptoms",
}
labels["taxonomic name"] = {
display = "[[taxonomy]]",
topical_categories = "Taxonomic names",
}
labels["tincture"] = {
display = "[[heraldry]]",
topical_categories = "Heraldic tinctures",
}
labels["veterinary disease"] = {
display = "[[veterinary medicine]]",
topical_categories = "Veterinary diseases",
}
labels["video game genre"] = {
display = "[[video game]]s",
topical_categories = "Video game genres",
}
labels["vitamin"] = {
display = "[[biochemistry]]",
topical_categories = "Vitamins",
}
labels["watercraft"] = {
display = "[[nautical]]",
topical_categories = true,
}
labels["weaponry"] = {
aliases = {"weapon", "weapons"},
display = "[[weaponry]]",
topical_categories = "Weapons",
}
labels["Wicca"] = {
display = "[[Wicca]]",
topical_categories = true,
}
labels["wiki jargon"] = {
aliases = {"wiki"},
display = "[[wiki]] [[jargon]]",
topical_categories = "Wiki",
}
labels["Wikimedia jargon"] = {
aliases = {"WMF", "WMF jargon", "Wiktionary", "Wiktionary jargon", "Wikipedia", "Wikipedia jargon"},
display = "[[w:Wikimedia Foundation|Wikimedia]] [[jargon]]",
topical_categories = "Wikimedia",
}
labels["wind instruments"] = {
aliases = {"wind instrument"},
display = "[[music]]",
topical_categories = true,
}
labels["woodwind instruments"] = {
aliases = {"woodwind instrument"},
display = "[[music]]",
topical_categories = true,
}
labels["xiangqi"] = {
aliases = {"Chinese chess"},
display = "[[xiangqi]]",
topical_categories = true,
}
labels["yoga pose"] = {
aliases = {"asana"},
display = "[[yoga]]",
topical_categories = "Yoga poses",
}
labels["zodiac constellations"] = {
display = "[[astronomy]]",
topical_categories = "Constellations in the zodiac",
}
-- Deprecated/do not use warning (ambiguous, unsuitable etc)
labels["deprecated label"] = {
aliases = {"emergency", "greekmyth", "industry", "morphology", "musici", "quantum", "vector"},
display = "<span style=\"color:red;\"><b>deprecated label</b></span>",
deprecated = true,
}
return require("Module:labels").finalize_data(labels)
i5b6jn88w4d7gmqyko8bl7cviazqsmm
සැකිල්ල:delete/documentation
10
13433
193423
167435
2024-11-21T09:55:50Z
Lee
19
/* අමතර අවධානයට */
193423
wikitext
text/x-wiki
{{documentation subpage}}
මෙම සැකිල්ල {{tl|මකන්න}} හෝ {{tl|delete}} ආකාරයෙන් ලිපිවලට එක් කළ හැකි අතර, එය පිටුවලට යෙදීමෙන් ඒවා [[:ප්රවර්ගය:delete|delete]] ප්රවර්ගයට එකතු වේ.
පරිපාලකවරුනි, කරුණාකර ලිපිය මැකීමට ප්රථම පිටුවේ ඉතිහාසය හා ලිපියට සබැඳුම් විමසා බලන්න.
[[ප්රවර්ගය:ලිපි මාතෘකා සැකිලි|{{PAGENAME}}]]
== භාවිතය ==
{{shortcut|Template:d}}
This template is intended to notify a [[WT:A|sysop]] that a page is junk which should very obviously be removed; it does so by adding the entry to [[:Category:Candidates for speedy deletion]], which is periodically checked by various sysops. If there is any possibility that the entry should be retained for some reason, or if you think it is nonsense but someone else might consider it a genuine contribution, use {{tl|rfd}} instead.
Do not blank the page before adding this template. When you add it to a page, click on the "What links here" tool to correct any incoming links.
It is generally better to improve a poorly formatted or defined entry than to delete it. If you are unsure what the definition should look like but you are sure it is a valid word, you can replace the incorrect definition with {{tl|rfdef}}.
If you are certain that an entry does not meet our [[WT:CFI|criteria for inclusion]], you can use this <nowiki>{{delete|}}</nowiki> template to flag the entry.
Always supply the first parameter (the reason for deletion) after the vertical bar "|".
=== උදාහරණ ===
*<nowiki>{{delete|misspelling of [[x]]}}</nowiki>
*<nowiki>{{delete|nonsense}}</nowiki>
*<nowiki>{{delete|encyclopedic}}</nowiki>
*<nowiki>{{delete|personal attack}}</nowiki>
=== අමතර අවධානයට ===
* {{clc|අභියෝගයට ලක් නොවන මකා දැමීම සඳහා යෝජිතයෝ}}
* [[:ප්රවර්ගය:ඉක්මන් මකා දැමීම සඳහා යෝජිතයෝ]]
* [[MediaWiki:Deletereason-dropdown|List of reasons for deletion]]
<includeonly>
[[ප්රවර්ගය:මකා දැමීම් සැකිලි]]
[[ප්රවර්ගය:ඉල්ලීම් සැකිලි]]
</includeonly>
8b4k9qrax6an73o42dohz54406nu9dr
සැකිල්ල:Han ref
10
13678
193362
42200
2024-11-21T07:49:57Z
Lee
19
193362
wikitext
text/x-wiki
{{#if:{{{kx|}}}|
* Kangxi ශබ්දකෝෂය: {{#ifexpr:(({{{kx}}} * 1000) round 0) mod 10|''not present'', would follow}} {{Han KangXi link|{{#expr:(({{{kx}}} * 1000) - ((({{{kx}}} * 1000) round 0) mod 1000)) / 1000}}}}, අනුලක්ෂණය {{#expr:(((({{{kx}}} * 1000) - ((({{{kx}}} * 1000) round 0) mod 10)) / 10) round 0) mod 100}}}}{{#if:{{{dkj|}}}|
* Dai Kanwa Jiten: character {{#expr:{{{dkj}}}+0}}}}{{#if:{{{dj|}}}|
* Dae Jaweon: {{#ifexpr:(({{{dj}}} * 1000) round 0) mod 10|''not present'', would follow}} page {{#expr:(({{{dj}}} * 1000) - ((({{{dj}}} * 1000) round 0) mod 1000)) / 1000}}, character {{#expr:(((({{{dj}}} * 1000) - ((({{{dj}}} * 1000) round 0) mod 10)) / 10) round 0) mod 100}}}}{{#if:{{{hdz|}}}|
* Hanyu Da Zidian (first edition): {{#ifexpr:(({{{hdz}}} * 1000) round 0) mod 10|''not present'', would follow}} volume {{#expr:(({{{hdz}}} * 1000) - ((({{{hdz}}} * 1000) round 0) mod 10000000)) / 10000000}}{{#ifexpr:{{{hdz}}}>=80000| (in addendum)}}, page {{#expr:((({{{hdz}}} * 1000) - ((({{{hdz}}} * 1000) round 0) mod 1000)) / 1000) mod 10000}}, character {{#expr:(((({{{hdz}}} * 1000) - (({{{hdz}}} * 1000) mod 10)) / 10) round 0) mod 100}}}}{{#if:{{{uh|}}}|
* [https://www.unicode.org/cgi-bin/GetUnihanData.pl?codepoint={{{uh|}}} Unihan data for U+{{{uh|}}}]}}<noinclude>{{documentation}}</noinclude>
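<!-- Worked example of the packed numbering above (assuming the page.position convention used by Unihan-style
     dictionary references, where the integer part is the page, the next two digits after the decimal point are
     the character position, and a trailing 1 marks a character that is not actually present): kx=0119.010
     yields "පිටුව 119, අනුලක්ෂණය 1", while kx=0119.011 additionally prints "not present, would follow". -->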
6xgr38y2tzcu48mppueg8zp1mcwnm9o
193363
193362
2024-11-21T07:50:25Z
Lee
19
193363
wikitext
text/x-wiki
{{#if:{{{kx|}}}|
* Kangxi ශබ්දකෝෂය: {{#ifexpr:(({{{kx}}} * 1000) round 0) mod 10|''not present'', would follow}} {{Han KangXi link|{{#expr:(({{{kx}}} * 1000) - ((({{{kx}}} * 1000) round 0) mod 1000)) / 1000}}}}, අනුලක්ෂණය {{#expr:(((({{{kx}}} * 1000) - ((({{{kx}}} * 1000) round 0) mod 10)) / 10) round 0) mod 100}}}}{{#if:{{{dkj|}}}|
* Dai Kanwa Jiten: අනුලක්ෂණය {{#expr:{{{dkj}}}+0}}}}{{#if:{{{dj|}}}|
* Dae Jaweon: {{#ifexpr:(({{{dj}}} * 1000) round 0) mod 10|''not present'', would follow}} page {{#expr:(({{{dj}}} * 1000) - ((({{{dj}}} * 1000) round 0) mod 1000)) / 1000}}, character {{#expr:(((({{{dj}}} * 1000) - ((({{{dj}}} * 1000) round 0) mod 10)) / 10) round 0) mod 100}}}}{{#if:{{{hdz|}}}|
* Hanyu Da Zidian (first edition): {{#ifexpr:(({{{hdz}}} * 1000) round 0) mod 10|''not present'', would follow}} volume {{#expr:(({{{hdz}}} * 1000) - ((({{{hdz}}} * 1000) round 0) mod 10000000)) / 10000000}}{{#ifexpr:{{{hdz}}}>=80000| (in addendum)}}, page {{#expr:((({{{hdz}}} * 1000) - ((({{{hdz}}} * 1000) round 0) mod 1000)) / 1000) mod 10000}}, character {{#expr:(((({{{hdz}}} * 1000) - (({{{hdz}}} * 1000) mod 10)) / 10) round 0) mod 100}}}}{{#if:{{{uh|}}}|
* [https://www.unicode.org/cgi-bin/GetUnihanData.pl?codepoint={{{uh|}}} Unihan data for U+{{{uh|}}}]}}<noinclude>{{documentation}}</noinclude>
3tnl71hcap1p4hgv7i3ejbidrx93mof
193364
193363
2024-11-21T07:50:56Z
Lee
19
193364
wikitext
text/x-wiki
{{#if:{{{kx|}}}|
* Kangxi ශබ්දකෝෂය: {{#ifexpr:(({{{kx}}} * 1000) round 0) mod 10|''not present'', would follow}} {{Han KangXi link|{{#expr:(({{{kx}}} * 1000) - ((({{{kx}}} * 1000) round 0) mod 1000)) / 1000}}}}, අනුලක්ෂණය {{#expr:(((({{{kx}}} * 1000) - ((({{{kx}}} * 1000) round 0) mod 10)) / 10) round 0) mod 100}}}}{{#if:{{{dkj|}}}|
* Dai Kanwa Jiten: අනුලක්ෂණය {{#expr:{{{dkj}}}+0}}}}{{#if:{{{dj|}}}|
* Dae Jaweon: {{#ifexpr:(({{{dj}}} * 1000) round 0) mod 10|''not present'', would follow}} පිටුව {{#expr:(({{{dj}}} * 1000) - ((({{{dj}}} * 1000) round 0) mod 1000)) / 1000}}, අනුලක්ෂණය {{#expr:(((({{{dj}}} * 1000) - ((({{{dj}}} * 1000) round 0) mod 10)) / 10) round 0) mod 100}}}}{{#if:{{{hdz|}}}|
* Hanyu Da Zidian (first edition): {{#ifexpr:(({{{hdz}}} * 1000) round 0) mod 10|''not present'', would follow}} volume {{#expr:(({{{hdz}}} * 1000) - ((({{{hdz}}} * 1000) round 0) mod 10000000)) / 10000000}}{{#ifexpr:{{{hdz}}}>=80000| (in addendum)}}, page {{#expr:((({{{hdz}}} * 1000) - ((({{{hdz}}} * 1000) round 0) mod 1000)) / 1000) mod 10000}}, character {{#expr:(((({{{hdz}}} * 1000) - (({{{hdz}}} * 1000) mod 10)) / 10) round 0) mod 100}}}}{{#if:{{{uh|}}}|
* [https://www.unicode.org/cgi-bin/GetUnihanData.pl?codepoint={{{uh|}}} Unihan data for U+{{{uh|}}}]}}<noinclude>{{documentation}}</noinclude>
ce9u3mqxbulmgxvjwnk2lxdfjuli0oi
193365
193364
2024-11-21T07:51:44Z
Lee
19
193365
wikitext
text/x-wiki
{{#if:{{{kx|}}}|
* Kangxi ශබ්දකෝෂය: {{#ifexpr:(({{{kx}}} * 1000) round 0) mod 10|''not present'', would follow}} {{Han KangXi link|{{#expr:(({{{kx}}} * 1000) - ((({{{kx}}} * 1000) round 0) mod 1000)) / 1000}}}}, අනුලක්ෂණය {{#expr:(((({{{kx}}} * 1000) - ((({{{kx}}} * 1000) round 0) mod 10)) / 10) round 0) mod 100}}}}{{#if:{{{dkj|}}}|
* Dai Kanwa Jiten: අනුලක්ෂණය {{#expr:{{{dkj}}}+0}}}}{{#if:{{{dj|}}}|
* Dae Jaweon: {{#ifexpr:(({{{dj}}} * 1000) round 0) mod 10|''not present'', would follow}} පිටුව {{#expr:(({{{dj}}} * 1000) - ((({{{dj}}} * 1000) round 0) mod 1000)) / 1000}}, අනුලක්ෂණය {{#expr:(((({{{dj}}} * 1000) - ((({{{dj}}} * 1000) round 0) mod 10)) / 10) round 0) mod 100}}}}{{#if:{{{hdz|}}}|
* Hanyu Da Zidian (පළමුවන සංස්කරණය): {{#ifexpr:(({{{hdz}}} * 1000) round 0) mod 10|''not present'', would follow}} වෙලුම {{#expr:(({{{hdz}}} * 1000) - ((({{{hdz}}} * 1000) round 0) mod 10000000)) / 10000000}}{{#ifexpr:{{{hdz}}}>=80000| (in addendum)}}, පිටුව {{#expr:((({{{hdz}}} * 1000) - ((({{{hdz}}} * 1000) round 0) mod 1000)) / 1000) mod 10000}}, අනුලක්ෂණය {{#expr:(((({{{hdz}}} * 1000) - (({{{hdz}}} * 1000) mod 10)) / 10) round 0) mod 100}}}}{{#if:{{{uh|}}}|
* [https://www.unicode.org/cgi-bin/GetUnihanData.pl?codepoint={{{uh|}}} Unihan data for U+{{{uh|}}}]}}<noinclude>{{documentation}}</noinclude>
00otsd41l1snqeneor160hs3y9v7q10
Module:category tree/poscatboiler/data/affixes and compounds
828
13840
193605
193229
2024-11-21T11:46:45Z
Lee
19
උපසර්ග
193605
Scribunto
text/plain
local labels = {}
local raw_categories = {}
local handlers = {}
local m_sinhala = require("Module:sinhala")
-----------------------------------------------------------------------------
-- --
-- LABELS --
-- --
-----------------------------------------------------------------------------
labels["alliterative compounds"] = {
description = "{{{langname}}} noun phrases composed of two or more stems that alliterate.",
parents = {"සංයුක්ත පද", "alliterative phrases"},
}
labels["antonymous compounds"] = {
description = "{{{langname}}} compounds in which one part is an antonym of the other.",
parents = {"dvandva compounds", sort = "antonym"},
}
labels["bahuvrihi compounds"] = {
description = "{{{langname}}} compounds in which the first part (A) modifies the second (B), and whose meaning follows a [[metonymic]] pattern: “<person> having a B that is A.”",
parents = {"සංයුක්ත පද", "exocentric compounds"},
}
-- Add "compound POS" categories for various parts of speech.
local compound_poses = {
"adjectives",
"adverbs",
"conjunctions",
"determiners",
"interjections",
"නාම පද",
"numerals",
"particles",
"postpositions",
"උපසර්ග",
"prepositions",
"pronouns",
"proper nouns",
"suffixes",
"verbs",
}
for _, pos in ipairs(compound_poses) do
labels["සංයුක්ත " .. pos] = {
description = "{{{langname}}} " .. pos .. " composed of two or more stems.",
parents = {{name = "සංයුක්ත පද", sort = " "}, pos},
}
end
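-- For example, the "නාම පද" entry above produces the label "සංයුක්ත නාම පද", whose description reads
-- "{{{langname}}} නාම පද composed of two or more stems." and whose parents are "සංයුක්ත පද" and "නාම පද".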
labels["compound determinatives"] = {
description = "{{{langname}}} determinatives composed of two or more stems.",
parents = {"සංයුක්ත පද", "determiners"},
}
labels["සංයුක්ත පද"] = {
description = "{{{langname}}} terms composed of two or more stems.",
umbrella_parents = "භාෂාව අනුව යෙදුම්, නිරුක්ති උප ප්රවර්ග අනුව",
parents = {"යෙදුම්, නිරුක්තිය අනුව"},
}
labels["dvandva compounds"] = {
description = "{{{langname}}} terms composed of two or more stems whose stems could be connected by an 'and'.",
parents = {"සංයුක්ත පද"},
}
labels["dvigu compounds"] = {
description = "{{{langname}}} [[tatpuruṣa]] compounds where the modifying member is a number",
parents = {"tatpurusa compounds"},
}
labels["endocentric compounds"] = {
description = "{{{langname}}} terms composed of two or more stems, one of which is the [[w:head (linguistics)|head]] of that compound.",
parents = {"සංයුක්ත පද"},
}
labels["endocentric noun-noun compounds"] = {
description = "{{{langname}}} terms composed of two or more stems, one of which is the [[w:head (linguistics)|head]] of that compound.",
breadcrumb = "noun-noun",
parents = {"endocentric compounds", "සංයුක්ත පද"},
}
labels["endocentric verb-noun compounds"] = {
description = "{{{langname}}} compounds in which the first element is a verbal stem, the second a nominal stem and the head of the compound.",
breadcrumb = "verb-noun",
parents = {"endocentric compounds", "verb-noun compounds"},
}
labels["exocentric compounds"] = {
description = "{{{langname}}} terms composed of two or more stems, none of which is the [[w:head (linguistics)|head]] of that compound.",
parents = {"සංයුක්ත පද"},
}
labels["exocentric verb-noun compounds"] = {
description = "{{{langname}}} compounds in which the first element is a transitive verb, the second a noun functioning as its direct object, and whose referent is the person or thing doing the action.",
breadcrumb = "verb-noun",
parents = {"exocentric compounds", "verb-noun compounds"},
}
labels["karmadharaya compounds"] = {
description = "{{{langname}}} terms composed of two or more stems in which the main stem determines the case endings.",
parents = {"tatpurusa compounds"},
}
labels["itaretara dvandva compounds"] = {
description = "{{{langname}}} terms composed of two or more stems whose stems could be connected by an 'and'.",
breadcrumb = "itaretara",
parents = {"dvandva compounds"},
}
labels["rhyming compounds"] = {
description = "{{{langname}}} noun phrases composed of two or more stems that rhyme.",
parents = {"සංයුක්ත පද", "rhyming phrases"},
}
labels["samahara dvandva compounds"] = {
description = "{{{langname}}} terms composed of two or more stems whose stems could be connected by an 'and'.",
breadcrumb = "samahara",
parents = {"dvandva compounds"},
}
labels["shitgibbons"] = {
description = "{{{langname}}} terms that consist of a single-syllable [[expletive]] followed by a two-syllable [[trochee]] that serves as a [[nominalizer]] or [[intensifier]].",
parents = {"endocentric compounds"},
}
labels["synonymous compounds"] = {
description = "{{{langname}}} compounds in which one part is a synonym of the other.",
parents = {"dvandva compounds", sort = "synonym"},
}
labels["tatpurusa compounds"] = {
description = "{{{langname}}} terms composed of two or more stems",
parents = {"සංයුක්ත පද"},
}
labels["verb-noun compounds"] = {
description = "{{{langname}}} compounds in which the first element is a transitive verb, the second a noun functioning as its direct object, and whose referent is the person or thing doing the action, or an adjective describing such a person or thing.",
parents = {"verb-object compounds"},
}
labels["verb-object compounds"] = {
description = "{{{langname}}} compounds in which the first element is a transitive verb, the second a term (usually but not always a noun) functioning as its (normally direct) object, and whose referent is the person or thing doing the action, or an adjective describing such a person or thing.",
additional = "Examples in English are {{m|en|pickpocket|lit=someone who picks pockets}} and {{m|en|catch-all|lit=something that catches everything}}.",
parents = {"සංයුක්ත පද"},
}
labels["verb-verb compounds"] = {
description = "{{{langname}}} compounds composed of two or more verbs in apposition, often either synonyms or antonyms, and whose referent refers to the result of performing those actions.",
parents = {"සංයුක්ත පද"},
}
labels["vrddhi derivatives"] = {
description = "{{{langname}}} terms derived from a Proto-Indo-European root by the process of [[w:vṛddhi|vṛddhi]] derivation.",
parents = {"යෙදුම්, නිරුක්තිය අනුව"},
}
labels["vrddhi gerundives"] = {
description = "{{{langname}}} [[gerundive]]s derived from a Proto-Indo-European root by the process of [[w:vṛddhi|vṛddhi]] derivation.",
parents = {"vrddhi derivatives"},
}
labels["vyadhikarana compounds"] = {
description = "{{{langname}}} terms composed of two or more stems in which the non-main stem determines the case endings.",
parents = {"tatpurusa compounds"},
}
for _, fixtype in ipairs({"circumfix", "infix", "interfix", "prefix", "suffix",}) do
labels["යෙදුම්, " .. m_sinhala.sinhala(fixtype .. "es") .. " අනුව"] = {
description = "{{{langname}}} යෙදුම්, ඒවායේ " .. m_sinhala.sinhala(fixtype .. "es") .. " වලට අනුව කාණ්ඩ වලට වෙන්කොට ඇති.",
umbrella_parents = "භාෂාව අනුව යෙදුම්, නිරුක්ති උප ප්රවර්ග අනුව",
parents = {{name = "යෙදුම්, නිරුක්තිය අනුව", sort = fixtype}, m_sinhala.sinhala(fixtype .. "es")},
}
end
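-- Each iteration above defines an umbrella label of the form "යෙදුම්, <affix type> අනුව", with the affix type
-- ("prefixes", "suffixes", etc.) rendered in Sinhala by Module:sinhala; the same rendered string also serves
-- as one of the label's parents alongside "යෙදුම්, නිරුක්තිය අනුව".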
-- Add 'umbrella_parents' key if not already present.
for key, data in pairs(labels) do
-- NOTE: umbrella.parents overrides umbrella_parents if both are given.
if not data.umbrella_parents then
data.umbrella_parents = "Types of compound terms by language"
end
end
-----------------------------------------------------------------------------
-- --
-- RAW CATEGORIES --
-- --
-----------------------------------------------------------------------------
raw_categories["Types of compound terms by language"] = {
description = "Umbrella categories covering topics related to types of compound terms.",
additional = "{{{umbrella_meta_msg}}}",
parents = {
"ඡත්ර මෙටා ප්රවර්ග",
{name = "සංයුක්ත පද", is_label = true, sort = " "},
{name = "භාෂාව අනුව යෙදුම්, නිරුක්ති උප ප්රවර්ග අනුව", sort = " "},
},
}
-----------------------------------------------------------------------------
-- --
-- HANDLERS --
-- --
-----------------------------------------------------------------------------
-----------------------------------------------------------------------------
------------------------------ Affix handlers -------------------------------
-----------------------------------------------------------------------------
table.insert(handlers, function(data)
local labelpref, pos, zz_term_and, affixtype, zz_and_id = data.label:match("^((.*), (.+) (.*) සහිත)(.*)$")
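-- The pattern splits a Sinhala affix-category label of the shape "<pos>, <term> <affix type> සහිත" into its
-- parts. With a hypothetical label such as "යෙදුම්, -කම ප්රත්ය සහිත", it would yield pos = "යෙදුම්" (mapped to
-- "terms" below), zz_term_and = "-කම" and affixtype = "ප්රත්ය" (mapped to "suffix" below); labelpref keeps the
-- whole matched prefix so that the term can later be substituted with "%s".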
local term_and_id
if zz_term_and ~= nil then
term_and_id = zz_term_and
if zz_and_id ~= nil then
term_and_id = term_and_id .. zz_and_id
end
end
if labelpref ~= nil then
-- නව ආකාරය සඳහා අවශ්ය වෙනස
labelpref = labelpref:gsub(zz_term_and, "%%s")
end
if pos == "යෙදුම්" then
pos = "terms"
end
if affixtype == "ප්රත්ය" then
affixtype = "suffix"
end
if affixtype == "උපසර්ග" then
affixtype = "prefix"
end
if affixtype then
local term, id = term_and_id:match("^(.+) %(([^()]+)%)$")
term = term or term_and_id
-- Convert term/alt into affixes if needed
local desc = {
["prefix"] = ", % උපසර්ගයෙන් ආරම්භ වන",
["suffix"] = ", %s ප්රත්යයෙන් අවසන් වන",
["circumfix"] = "bookended with the circumfix",
["infix"] = "spliced with the infix",
["interfix"] = "joined with the interfix",
-- Transfixes not supported currently.
-- ["transfix"] = "patterned with the transfix",
}
if not desc[affixtype] then
return nil
end
-- Here, {LANG} is replaced with the actual language, {TERM_AND_ID} with the actual term (or with 'TERM<id:ID>'
-- if there is an ID), {BASE} with '<var>base</var>', {BASE2} with '<var>base2</var>', {BASE_EXPL} with an
-- explanation of what "base" means, {BASE_BASE2_EXPL} with an explanation of what "base" and "base2" mean, and
-- {POS} with '|pos=POS' if there is a `pos` other than "terms", otherwise a blank string.
local what_categorizes = {
["prefix"] = "{{tl|af|{LANG}|{TERM_AND_ID}|{BASE}{POS}}} or {{tl|affix|{LANG}|{TERM_AND_ID}|{BASE}{POS}}} (හෝ වැඩි-කැමැත්තක්-නොදක්වන ආකාර වන {{tl|pre}} හෝ {{tl|prefix}}) මගින් සිදු කරයි. මෙහි {BASE_EXPL}",
["suffix"] = "{{tl|af|{LANG}|{BASE}|{TERM_AND_ID}{POS}}} or {{tl|affix|{LANG}|{BASE}|{TERM_AND_ID}{POS}}} (හෝ වැඩි-කැමැත්තක්-නොදක්වන ආකාර වන {{tl|suf}} හෝ {{tl|suffix}}) මගින් සිදු කරයි. මෙහි {BASE_EXPL}",
["circumfix"] = "{{tl|af|{LANG}|{BASE}|{TERM_AND_ID}{POS}}} or {{tl|affix|{LANG}|{BASE}|{TERM_AND_ID}{POS}}}, where {BASE_EXPL}",
["infix"] = "{{tl|infix|{LANG}|{BASE}|{TERM_AND_ID}{POS}}}, where {BASE_EXPL}",
["interfix"] = "{{tl|af|{LANG}|{BASE}|{TERM_AND_ID}{POS}|{BASE2}}} or {{tl|affix|{LANG}|{BASE}|{TERM_AND_ID}|{BASE2}{POS}}}, where {BASE_BASE2_EXPL}",
}
local args = require("Module:parameters").process(data.args, {
["alt"] = true,
["sc"] = true,
["sort"] = true,
["tr"] = true,
["ts"] = true,
})
local sc = data.sc or args.sc and require("Module:scripts").getByCode(args.sc, "sc") or nil
local m_affix = require("Module:affix")
-- Call make_affix to add display hyphens if they're not already present.
local _, display_term, lookup_term = m_affix.make_affix(term, data.lang, sc, affixtype, nil, true)
local _, display_alt = m_affix.make_affix(args.alt, data.lang, sc, affixtype)
local _, display_tr = m_affix.make_affix(args.tr, data.lang, require("Module:scripts").getByCode("Latn"), affixtype)
local _, display_ts = m_affix.make_affix(args.ts, data.lang, require("Module:scripts").getByCode("Latn"), affixtype)
local m_script_utilities = require("Module:script utilities")
local id_text = id and " (" .. id .. ")" or ""
-- Compute parents.
local parents = {}
if id then
if pos == "words" then
-- don't allow formerly-named categories with "words"
return nil
end
if pos == "terms" then
table.insert(parents, {name = (labelpref):format(term), sort = id, args = args})
else
table.insert(parents, {name = "terms " .. affixtype .. "ed with " .. term_and_id, sort = id .. ", " .. pos, args = args})
table.insert(parents, {name = (labelpref):format(term), sort = id, args = args})
end
elseif pos == "words" then
-- don't allow formerly-named categories with "words"
return nil
elseif pos ~= "terms" then
table.insert(parents, {name = "terms " .. affixtype .. "ed with " .. term, sort = pos, args = args})
end
table.insert(parents, {name = "යෙදුම්, " .. m_sinhala.sinhala(affixtype .. "es") .. " අනුව", sort = (data.lang:makeSortKey((data.lang:makeEntryName(args.sort or term))))})
-- If other affixes are mapped to this one, show them.
local additional
if data.lang then
local langcode = data.lang:getCode()
if m_affix.langs_with_lang_specific_data[langcode] then
local langdata = mw.loadData(m_affix.affix_lang_data_module_prefix .. langcode)
local variants = {}
if langdata.affix_mappings then
for variant, canonical in pairs(langdata.affix_mappings) do
-- Above, we converted the stripped link term as we received it to the lookup form, so we
-- can look up the variants that are mapped to this term. Once we find them, map them to
-- display form.
local is_variant = false
if type(canonical) == "table" then
for _, canonical_v in pairs(canonical) do
if canonical_v == lookup_term then
is_variant = true
break
end
end
else
is_variant = canonical == lookup_term
end
if is_variant then
local _, display_variant = m_affix.make_affix(variant, data.lang, sc, affixtype)
table.insert(variants, "{{m|" .. langcode .. "|" .. display_variant .. "}}")
end
end
if #variants > 0 then
table.sort(variants)
additional = ("This category also includes terms %sed with %s."):format(affixtype,
require("Module:table").serialCommaJoin(variants))
end
end
end
end
if data.lang then
local what_categorizes_msg = what_categorizes[affixtype]
if not what_categorizes_msg then
error(("Internal error: No what_categorizes value for affixtype '%s' for label '%s', lang '%s'"):
format(affixtype, data.label, data.lang:getCode()))
end
what_categorizes_msg = "මෙම ප්රවර්ගය තුළට යෙදුම් එක් කිරීම " .. (what_categorizes_msg
:gsub("{LANG}", data.lang:getCode())
:gsub("{TERM_AND_ID}", require("Module:string utilities").replacement_escape(
id and ("%s<id:%s>"):format(term, id) or term))
:gsub("{POS}", require("Module:string utilities").replacement_escape(
pos == "terms" and "" or ("|pos=%s"):format(pos)))
:gsub("{BASE}", "<var>base</var>")
:gsub("{BASE2}", "<var>base2</var>")
:gsub("{BASE_EXPL}", "<code><var>base</var></code> යනු යෙදුම ව්යුත්පන්න වී ඇති මූලික ලෙමාව වෙයි")
:gsub("{BASE_BASE2_EXPL}", "<code><var>base</var></code> and <code><var>base2</var></code> are the " ..
"යෙදුම ව්යුත්පන්න වී ඇති මූලික ලෙමා")
) .. "."
if additional then
additional = additional .. "\n\n" .. what_categorizes_msg
else
additional = what_categorizes_msg
end
end
return {
description = "{{{langname}}} " .. m_sinhala.sinhala(pos) .. " " .. (desc[affixtype]):format(require("Module:links").full_link({
lang = data.lang, term = display_term, alt = display_alt, sc = sc, id = id, tr = display_tr, ts = display_ts}, "term")) .. ".",
additional = additional,
breadcrumb = pos == "terms" and m_script_utilities.tag_text(display_alt or display_term, data.lang, sc, "term") .. id_text or pos,
displaytitle = "{{{langname}}} " .. (labelpref):format(m_script_utilities.tag_text(term, data.lang, sc, "term")) .. id_text,
parents = parents,
umbrella = false,
}, true -- true = args handled
end
end)
return {LABELS = labels, RAW_CATEGORIES = raw_categories, HANDLERS = handlers}
e3s10nys1uhggu4sghrtt0akgrbe663
සැකිල්ල:Han KangXi link
10
14480
193361
43974
2024-11-21T07:46:24Z
Lee
19
193361
wikitext
text/x-wiki
<includeonly>[https://www.kangxizidian.com/kangxi/{{#ifexpr:{{{1}}}<1000|0}}{{#ifexpr:{{{1}}}<100|0}}{{{1}}}.gif පිටුව {{{1}}}]</includeonly><noinclude>{{documentation}}</noinclude>
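<!-- Example (derived from the expressions above): {{Han KangXi link|119}} links to
     https://www.kangxizidian.com/kangxi/0119.gif and displays "පිටුව 119"; the #ifexpr checks pad the page
     number with leading zeros to build the image file name. -->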
06m00pzirwvawtz4f2i5o0yqmxnrb0h
Module:template parser
828
15136
193446
183624
2024-11-19T16:41:21Z
en>Theknightwho
0
Discard unnecessary data loaders.
193446
Scribunto
text/plain
--[[
NOTE: This module works by using recursive backtracking to build a node tree, which can then be traversed as necessary.
Because it is called by a number of high-use modules, it has been optimised for speed using a profiler, since it is used to scrape data from large numbers of pages very quickly. To that end, it rolls some of its own methods in cases where this is faster than using a function from one of the standard libraries. Please DO NOT "simplify" the code by removing these, since you are almost guaranteed to slow things down, which could seriously impact performance on pages which call this module hundreds or thousands of times.
It has also been designed to emulate the native parser's behaviour as much as possible, which in some cases means replicating bugs or unintuitive behaviours in that code; these should not be "fixed", since it is important that the outputs are the same. Most of these originate from deficient regular expressions, which can't be used here, so the bugs have to be manually reintroduced as special cases (e.g. onlyinclude tags being case-sensitive and whitespace intolerant, unlike all other tags). If any of these are fixed, this module should also be updated accordingly.
]]
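--[==[
Minimal usage sketch (hedged; output shown only as an illustration): the tree
built by export.parse can be walked node-by-node, and class_else_type reports
the class of each node (or the Lua type of non-node items).
	local m_template_parser = require("Module:template parser")
	local tree = m_template_parser.parse("Some text {{temp|arg=value}} and {{{1|a default}}}.")
	local iter = tree:__pairs("next_node")
	local node = iter()
	while node ~= nil do
		mw.log(m_template_parser.class_else_type(node)) -- e.g. "template", "argument", "parameter"
		node = iter()
	end
]==]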
local export = {}
local data_module = "Module:template parser/data"
local magic_words_data_module = "Module:data/magic words"
local pages_module = "Module:pages"
local parser_extension_tags_data_module = "Module:data/parser extension tags"
local parser_module = "Module:parser"
local string_utilities_module = "Module:string utilities"
local table_module = "Module:table"
local require = require
local m_parser = require(parser_module)
local mw = mw
local mw_title = mw.title
local mw_uri = mw.uri
local string = string
local table = table
local anchor_encode = mw_uri.anchorEncode
local build_template -- defined as export.buildTemplate below
local class_else_type = m_parser.class_else_type
local concat = table.concat
local encode_uri = mw_uri.encode
local find = string.find
local format = string.format
local gsub = string.gsub
local html_create = mw.html.create
local insert = table.insert
local is_node = m_parser.is_node
local load_data = mw.loadData
local lower = string.lower
local make_title = mw_title.makeTitle -- unconditionally adds the specified namespace prefix
local match = string.match
local new_title = mw_title.new -- specified namespace prefix is only added if the input doesn't contain one
local next = next
local pairs = pairs
local parse -- defined as export.parse below
local parse_template_name -- defined below
local pcall = pcall
local rep = string.rep
local reverse = string.reverse
local select = select
local sub = string.sub
local title_equals = mw_title.equals
local tostring = m_parser.tostring
local type = type
local umatch = mw.ustring.match
--[==[
Loaders for functions in other modules, which overwrite themselves with the target function when called. This ensures modules are only loaded when needed, retains the speed/convenience of locally-declared pre-loaded functions, and has no overhead after the first call, since the target functions are called directly in any subsequent calls.]==]
local function decode_entities(...)
decode_entities = require(string_utilities_module).decode_entities
return decode_entities(...)
end
local function encode_entities(...)
encode_entities = require(string_utilities_module).encode_entities
return encode_entities(...)
end
local function is_valid_title(...)
is_valid_title = require(pages_module).is_valid_title
return is_valid_title(...)
end
local function pattern_escape(...)
pattern_escape = require(string_utilities_module).pattern_escape
return pattern_escape(...)
end
local function php_trim(...)
php_trim = require(string_utilities_module).php_trim
return php_trim(...)
end
local function replacement_escape(...)
replacement_escape = require(string_utilities_module).replacement_escape
return replacement_escape(...)
end
local function scribunto_param_key(...)
scribunto_param_key = require(string_utilities_module).scribunto_param_key
return scribunto_param_key(...)
end
local function sorted_pairs(...)
sorted_pairs = require(table_module).sortedPairs
return sorted_pairs(...)
end
local function split(...)
split = require(string_utilities_module).split
return split(...)
end
local function table_len(...)
table_len = require(table_module).length
return table_len(...)
end
local function uupper(...)
uupper = require(string_utilities_module).upper
return uupper(...)
end
--[==[
Loaders for objects, which load data (or some other object) into some variable, which can then be accessed as "foo or get_foo()", where the function get_foo sets the object to "foo" and then returns it. This ensures they are only loaded when needed, and avoids the need to check for the existence of the object each time, since once "foo" has been set, "get_foo" will not be called again.]==]
local data
local function get_data()
data, get_data = load_data(data_module), nil
return data
end
local frame
local function get_frame()
frame, get_frame = mw.getCurrentFrame(), nil
return frame
end
local magic_words
local function get_magic_words()
magic_words, get_magic_words = load_data(magic_words_data_module), nil
return magic_words
end
local parser_extension_tags
local function get_parser_extension_tags()
parser_extension_tags, get_parser_extension_tags = load_data(parser_extension_tags_data_module), nil
return parser_extension_tags
end
local Parser, Node = m_parser.new()
------------------------------------------------------------------------------------
--
-- Nodes
--
------------------------------------------------------------------------------------
Node.keys_to_remove = {"handler", "head", "pattern", "route", "step"}
local function expand(obj, frame_args)
return is_node(obj) and obj:expand(frame_args) or obj
end
export.expand = expand
function Node:expand(frame_args)
local output = {}
for i = 1, #self do
output[i] = expand(self[i], frame_args)
end
return concat(output)
end
local Wikitext = Node:new_class("wikitext")
-- force_node ensures the output will always be a node.
function Wikitext:new(this, force_node)
if type(this) ~= "table" then
return force_node and Node.new(self, {this}) or this
elseif #this == 1 then
local this1 = this[1]
return force_node and not is_node(this1) and Node.new(self, this) or this1
end
local success, str = pcall(concat, this)
if success then
return force_node and Node.new(self, {str}) or str
end
return Node.new(self, this)
end
-- First value is the parameter name.
-- Second value is the parameter's default value.
-- Any additional values are ignored: e.g. "{{{a|b|c}}}" is parameter "a" with default value "b" (*not* "b|c").
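--[==[
Illustrative sketch of the Parameter API (using find_parameters, which is
defined near the end of this module):
	local m_template_parser = require("Module:template parser")
	for param in m_template_parser.find_parameters("{{{a|b|c}}}") do
		mw.log(param:get_name())    -- "a"
		mw.log(param:get_default()) -- "b"; the trailing "|c" is discarded, as noted above
	end
]==]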
local Parameter = Node:new_class("parameter")
function Parameter:new(this)
local this2 = this[2]
if class_else_type(this2) == "argument" then
insert(this2, 2, "=")
this2 = Wikitext:new(this2)
end
return Node.new(self, {this[1], this2})
end
function Parameter:__tostring()
local output = {}
for i = 1, #self do
output[i] = tostring(self[i])
end
return "{{{" .. concat(output, "|") .. "}}}"
end
function Parameter:next(i)
i = i + 1
if i <= 2 then
return self[i], self, i
end
end
function Parameter:get_name(frame_args)
return scribunto_param_key(expand(self[1], frame_args))
end
function Parameter:get_default(frame_args)
local default = self[2]
if default ~= nil then
return expand(default, frame_args)
end
return "{{{" .. expand(self[1], frame_args) .. "}}}"
end
function Parameter:expand(frame_args)
if frame_args == nil then
return self:get_default()
end
local name = expand(self[1], frame_args)
local val = frame_args[scribunto_param_key(name)] -- Parameter in use.
if val ~= nil then
return val
end
val = self[2] -- Default.
if val ~= nil then
return expand(val, frame_args)
end
return "{{{" .. name .. "}}}"
end
local Argument = Node:new_class("argument")
function Argument:__tostring()
return tostring(self[1]) .. "=" .. tostring(self[2])
end
function Argument:expand(frame_args)
return expand(self[1], frame_args) .. "=" .. expand(self[2], frame_args)
end
local Template = Node:new_class("template")
function Template:__tostring()
local output = {}
for i = 1, #self do
output[i] = tostring(self[i])
end
return "{{" .. concat(output, "|") .. "}}"
end
-- Normalize the template name, check it's a valid template, then memoize results (using false for invalid titles).
-- Parser functions (e.g. {{#IF:a|b|c}}) need to have the first argument extracted from the title, as it comes after the colon. Because of this, the parser function and first argument are memoized as a table.
-- FIXME: Some parser functions have special argument handling (e.g. {{#SWITCH:}}).
do
local page_title = mw_title.getCurrentTitle()
local namespace_has_subpages = mw.site.namespaces[page_title.namespace].hasSubpages
local raw_pagename = page_title.fullText
local templates = {}
local parser_variables = {}
local parser_functions = {}
local function retrieve_magic_word_data(chunk)
local mgw_data = (magic_words or get_magic_words())[chunk]
if mgw_data then
return mgw_data
end
local normalized = uupper(chunk)
mgw_data = magic_words[normalized]
if mgw_data and not mgw_data.case_sensitive then
return mgw_data
end
end
-- Returns the name required to transclude the title object `title` using
-- template {{ }} syntax.
local function get_template_invocation_name(title)
if not is_valid_title(title) then
error("Template invocations require a valid page title, which cannot contain an interwiki prefix.")
end
local namespace = title.namespace
-- If not in the template namespace, include the prefix (or ":" if
-- mainspace).
if namespace ~= 10 then
return namespace == 0 and ":" .. title.text or title.prefixedText
end
-- If in the template namespace and it shares a name with a magic word,
-- it needs the prefix "Template:".
local text = title.text
local colon = find(text, ":", 1, true)
if not colon then
local mgw_data = retrieve_magic_word_data(text)
return mgw_data and mgw_data.parser_variable and title.prefixedText or text
end
local mgw_data = retrieve_magic_word_data(sub(text, 1, colon - 1))
if mgw_data and (mgw_data.parser_function or mgw_data.transclusion_modifier) then
return title.prefixedText
end
-- Also if "Template:" is necessary for disambiguation (e.g.
-- "Template:Category:Foo" can't be abbreviated to "Category:Foo").
local check = new_title(text, 10)
return check and title_equals(title, check) and text or title.prefixedText
end
export.getTemplateInvocationName = get_template_invocation_name
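--[==[
Hedged behaviour sketch for getTemplateInvocationName (actual results depend on
[[Module:data/magic words]]):
	export.getTemplateInvocationName(mw.title.new("Template:foo"))          --> "foo"
	export.getTemplateInvocationName(mw.title.new("apple"))                 --> ":apple" (mainspace titles need the leading colon)
	export.getTemplateInvocationName(mw.title.new("Module:bar"))            --> "Module:bar"
	export.getTemplateInvocationName(mw.title.new("Template:Category:Foo")) --> "Template:Category:Foo" (prefix kept for disambiguation)
]==]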
-- Returns whether a title is a redirect or not. Structured like this to
-- allow the use of pcall, since it will throw an error if the expensive
-- parser function limit has been reached.
local function is_redirect(title)
return title.isRedirect
end
function parse_template_name(name, has_args, fragment, force_transclusion)
local chunks, colon, start, n, p = {}, find(name, ":", 1, true), 1, 0, 0
while colon do
-- Pattern applies PHP ltrim.
local mgw_data = retrieve_magic_word_data(match(sub(name, start, colon - 1), "[^%z\t-\v\r ].*") or "")
if not mgw_data then
break
end
local priority = mgw_data.priority
if not (priority and priority > p) then
local pf = mgw_data.parser_function and mgw_data.name or nil
if pf then
n = n + 1
chunks[n] = pf .. ":"
return chunks, "parser function", sub(name, colon + 1)
end
break
end
n = n + 1
chunks[n] = mgw_data.name .. ":"
start, p = colon + 1, priority
colon = find(name, ":", start, true)
end
if start > 1 then
name = sub(name, start)
end
name = php_trim(name)
-- Parser variables can only take SUBST:/SAFESUBST: as modifiers.
if not has_args and p <= 1 then
local mgw_data = retrieve_magic_word_data(name)
local pv = mgw_data and mgw_data.parser_variable and mgw_data.name or nil
if pv then
n = n + 1
chunks[n] = pv
return chunks, "parser variable"
end
end
-- Handle relative template names.
if namespace_has_subpages then
-- If the name starts with "/", it's treated as a subpage of the
-- current page. Final slashes are trimmed, but this can't affect
-- the intervening slash (e.g. {{///}} refers to "{{PAGENAME}}/").
local initial = sub(name, 1, 1)
if initial == "/" then
name = raw_pagename .. (match(name, "^/.*[^/]") or "/")
-- If it starts with "../", trim it and any that follow, and go up
-- that many subpage levels. Then, treat any additional text as
-- a subpage of that page; final slashes are trimmed.
elseif initial == "." and sub(name, 2, 3) == "./" then
local n = 4
while sub(name, n, n + 2) == "../" do
n = n + 3
end
-- Retain an initial "/".
name = sub(name, n - 1)
-- Trim the relevant number of subpages from the pagename.
local pagename, i = reverse(raw_pagename), 0
for _ = 1, (n - 1) / 3 do
i = find(pagename, "/", i + 1, true)
-- Fail if there aren't enough slashes.
if not i then
return nil
end
end
-- Add the subpage text; since the intervening "/" is retained
-- in `name`, it can be trimmed along with any other final
-- slashes (e.g. {{..///}} refers to "{{BASEPAGENAME}}").
name = reverse(sub(pagename, i + 1)) .. (match(name, "^.*[^/]") or "")
end
end
local title = new_title(name, 10)
if not is_valid_title(title) then
return nil
end
-- If `fragment` is set, save the original title's fragment, since it
-- won't carry through to any redirect targets.
if fragment then
fragment = title.fragment
end
-- Resolve any redirects. Note that is_valid_title treats interwiki
-- titles as invalid, which is correct in this case: if the redirect
-- target is an interwiki link, the template won't fail, but the
-- redirect does not get resolved (i.e. the redirect page itself gets
-- transcluded, so the template name should not be normalized to the
-- target). It also treats titles that only have fragments as invalid
-- (e.g. "#foo"), but these can't be used as redirects anyway.
-- title.redirectTarget increments the expensive parser function count,
-- but avoids extraneous transclusions polluting template lists and the
-- performance hit caused by indiscriminately grabbing redirectTarget.
-- However, if the expensive parser function limit has already been hit,
-- redirectTarget is used as a fallback. force_transclusion forces the
-- use of the fallback.
local redirect = true
if not force_transclusion then
local success, resolved = pcall(is_redirect, title)
if success and not resolved then
redirect = false
end
end
if redirect then
redirect = title.redirectTarget
if is_valid_title(redirect) then
title = redirect
end
end
local chunk = get_template_invocation_name(title)
-- Set the fragment (if applicable).
if fragment then
chunk = chunk .. "#" .. fragment
end
chunks[n + 1] = chunk
return chunks, "template"
end
-- Note: force_transclusion avoids incrementing the expensive parser
-- function count by forcing transclusion instead. This should only be used
-- when there is a real risk that the expensive parser function limit of
-- 500 will be hit.
local function process_name(self, frame_args, force_transclusion)
local name = expand(self[1], frame_args)
local has_args, norm = #self > 1
if not has_args then
norm = parser_variables[name]
if norm then
return norm, "parser variable"
end
end
norm = templates[name]
if norm then
local pf_arg1 = parser_functions[name]
return norm, pf_arg1 and "parser function" or "template", pf_arg1
elseif norm == false then
return nil
end
local chunks, subclass, pf_arg1 = parse_template_name(name, has_args, nil, force_transclusion)
-- Fail if invalid.
if not chunks then
templates[name] = false
return nil
end
local chunk1 = chunks[1]
-- Fail on SUBST:.
if chunk1 == "SUBST:" then
templates[name] = false
return nil
-- Any modifiers are ignored.
elseif subclass == "parser function" then
local pf = chunks[#chunks]
templates[name] = pf
parser_functions[name] = pf_arg1
return pf, "parser function", pf_arg1
end
-- Ignore SAFESUBST:, and treat MSGNW: as a parser function with the pagename as its first argument (ignoring any RAW: that comes after).
if chunks[chunk1 == "SAFESUBST:" and 2 or 1] == "MSGNW:" then
pf_arg1 = chunks[#chunks]
local pf = "MSGNW:"
templates[name] = pf
parser_functions[name] = pf_arg1
return pf, "parser function", pf_arg1
end
-- Ignore any remaining modifiers, as they've done their job.
local output = chunks[#chunks]
if subclass == "parser variable" then
parser_variables[name] = output
else
templates[name] = output
end
return output, subclass
end
function Template:get_name(frame_args, force_transclusion)
-- Only return the first return value.
return (process_name(self, frame_args, force_transclusion))
end
function Template:get_arguments(frame_args)
local name, subclass, pf_arg1 = process_name(self, frame_args)
if name == nil then
return nil
elseif subclass == "parser variable" then
return {}
end
local template_args = {}
if subclass == "parser function" then
template_args[1] = pf_arg1
for i = 2, #self do
template_args[i] = expand(self[i], frame_args) -- Not trimmed.
end
return template_args
end
local implicit = 0
for i = 2, #self do
local arg = self[i]
if class_else_type(arg) == "argument" then
template_args[scribunto_param_key(expand(arg[1], frame_args))] = php_trim(expand(arg[2], frame_args))
else
implicit = implicit + 1
template_args[implicit] = expand(arg, frame_args) -- Not trimmed.
end
end
return template_args
end
end
-- BIG TODO: manual template expansion.
function Template:expand()
return (frame or get_frame()):preprocess(tostring(self))
end
local Tag = Node:new_class("tag")
do
local php_htmlspecialchars_data
local function get_php_htmlspecialchars_data()
php_htmlspecialchars_data, get_php_htmlspecialchars_data = (data or get_data()).php_htmlspecialchars, nil
return php_htmlspecialchars_data
end
local function php_htmlspecialchars(str, compat)
return (gsub(str, compat and "[&\"<>]" or "[&\"'<>]", php_htmlspecialchars_data or get_php_htmlspecialchars_data()))
end
function Tag:__tostring()
local open_tag, attributes, n = {"<", self.name}, self:get_attributes(), 2
for attr, value in next, attributes do
n = n + 1
open_tag[n] = " " .. php_htmlspecialchars(attr) .. "=\"" .. php_htmlspecialchars(value, true) .. "\""
end
if self.self_closing then
return concat(open_tag) .. "/>"
end
return concat(open_tag) .. ">" .. concat(self) .. "</" .. self.name .. ">"
end
local valid_attribute_name
local function get_valid_attribute_name()
valid_attribute_name, get_valid_attribute_name = (data or get_data()).valid_attribute_name, nil
return valid_attribute_name
end
function Tag:get_attributes()
local raw = self.attributes
if not raw then
self.attributes = {}
return self.attributes
elseif type(raw) == "table" then
return raw
end
if sub(raw, -1) == "/" then
raw = sub(raw, 1, -2)
end
local attributes, head = {}, 1
-- Semi-manual implementation of the native regex.
while true do
local name, loc = match(raw, "([^\t\n\f\r />][^\t\n\f\r /=>]*)()", head)
if not name then
break
end
head = loc
local value
loc = match(raw, "^[\t\n\f\r ]*=[\t\n\f\r ]*()", head)
if loc then
head = loc
-- Either "", '' or the value ends on a space/at the end. Missing
-- end quotes are repaired by closing the value at the end.
value, loc = match(raw, "^\"([^\"]*)\"?()", head)
if not value then
value, loc = match(raw, "^'([^']*)'?()", head)
if not value then
value, loc = match(raw, "^([^\t\n\f\r ]*)()", head)
end
end
head = loc
end
-- valid_attribute_name is a pattern matching a valid attribute name.
-- Defined in the data due to its length - see there for more info.
if umatch(name, valid_attribute_name or get_valid_attribute_name()) then
-- Sanitizer applies PHP strtolower (ASCII-only).
attributes[lower(name)] = value and decode_entities(
php_trim((gsub(value, "[\t\n\r ]+", " ")))
) or ""
end
end
self.attributes = attributes
return attributes
end
end
function Tag:expand()
return (frame or get_frame()):preprocess(tostring(self))
end
local Heading = Node:new_class("heading")
function Heading:new(this)
if #this > 1 then
local success, str = pcall(concat, this)
if success then
return Node.new(self, {
str,
level = this.level,
section = this.section,
index = this.index
})
end
end
return Node.new(self, this)
end
function Heading:__tostring()
local eq = rep("=", self.level)
return eq .. Node.__tostring(self) .. eq
end
do
local expand_node = Node.expand
-- Expanded heading names can contain "\n" (e.g. inside nowiki tags), which
-- causes any heading containing them to fail. However, in such cases, the
-- native parser still treats it as a heading for the purpose of section
-- numbers.
local function validate_name(self, frame_args)
local name = expand_node(self, frame_args)
if find(name, "\n", 1, true) then
return nil
end
return name
end
function Heading:get_name(frame_args)
local name = validate_name(self, frame_args)
return name ~= nil and php_trim(name) or nil
end
-- FIXME: account for anchor disambiguation.
function Heading:get_anchor(frame_args)
local name = validate_name(self, frame_args)
return name ~= nil and decode_entities(anchor_encode(name)) or nil
end
function Heading:expand(frame_args)
local eq = rep("=", self.level)
return eq .. expand_node(self, frame_args) .. eq
end
end
------------------------------------------------------------------------------------
--
-- Parser
--
------------------------------------------------------------------------------------
function Parser:read(i, j)
local head, i = self.head, i or 0
return sub(self.text, head + i, head + (j or i))
end
function Parser:advance(n)
self.head = self.head + (n or self[-1].step or 1)
end
function Parser:jump(head)
self.head = head
self[-1].nxt = nil
end
function Parser:set_pattern(pattern)
local layer = self[-1]
layer.pattern = pattern
layer.nxt = nil
end
function Parser:consume()
local layer = self[-1]
local this = layer.nxt
if this then
layer.nxt = nil
else
local text, head = self.text, self.head
local loc1, loc2 = find(text, layer.pattern, head)
if loc1 == head or not loc1 then
this = sub(text, head, loc2)
else
this = sub(text, head, loc1 - 1)
layer.nxt = sub(text, loc1, loc2)
end
end
layer.step = #this
return layer.handler(self, this)
end
-- Template or parameter.
-- Parsed by matching the opening braces innermost-to-outermost (ignoring lone closing braces). Parameters {{{ }}} take priority over templates {{ }} where possible, but a double closing brace will always result in a closure, even if there are 3+ opening braces.
-- For example, "{{{{foo}}}}" (4) is parsed as a parameter enclosed by single braces, and "{{{{{foo}}}}}" (5) is a parameter inside a template. However, "{{{{{foo }} }}}" is a template inside a parameter, due to "}}" forcing the closure of the inner node.
do
-- Handlers.
local handle_name
local handle_argument
local function do_template_or_parameter(self, inner_node)
self:push_sublayer(handle_name)
self:set_pattern("[\n<[{|}]")
-- If a node has already been parsed, nest it at the start of the new
-- outer node (e.g. when parsing"{{{{foo}}bar}}", the template "{{foo}}"
-- is parsed first, since it's the innermost, and becomes the first
-- node of the outer template.
if inner_node then
self:emit(inner_node)
end
end
function handle_name(self, ...)
handle_name = self:switch(handle_name, {
["\n"] = Parser.heading_block,
["<"] = Parser.tag,
["["] = Parser.wikilink_block,
["{"] = Parser.braces,
["|"] = function(self)
self:emit(Wikitext:new(self:pop_sublayer()))
self:push_sublayer(handle_argument)
self:set_pattern("[\n<=[{|}]")
end,
["}"] = function(self)
if self:read(1) == "}" then
self:emit(Wikitext:new(self:pop_sublayer()))
return self:pop()
end
self:emit("}")
end,
[""] = Parser.fail_route,
[false] = Parser.emit
})
return handle_name(self, ...)
end
function handle_argument(self, ...)
local function emit_argument(self)
local arg = Wikitext:new(self:pop_sublayer())
local layer = self[-1]
local key = layer.key
if key then
arg = Argument:new{key, arg}
layer.key = nil
end
self:emit(arg)
end
handle_argument = self:switch(handle_argument, {
["\n"] = function(self)
return self:heading_block("\n", self[-1].key and "=" or "==")
end,
["<"] = Parser.tag,
["="] = function(self)
local key = Wikitext:new(self:pop_sublayer())
self[-1].key = key
self:push_sublayer(handle_argument)
self:set_pattern("[\n<[{|}]")
end,
["["] = Parser.wikilink_block,
["{"] = Parser.braces,
["|"] = function(self)
emit_argument(self)
self:push_sublayer(handle_argument)
self:set_pattern("[\n<=[{|}]")
end,
["}"] = function(self)
if self:read(1) == "}" then
emit_argument(self)
return self:pop()
end
self:emit("}")
end,
[""] = Parser.fail_route,
[false] = Parser.emit
})
return handle_argument(self, ...)
end
function Parser:template_or_parameter()
local text, head, node_to_emit, failed = self.text, self.head
-- Comments/tags interrupt the brace count.
local braces = match(text, "^{+()", head) - head
self:advance(braces)
while true do
local success, node = self:try(do_template_or_parameter, node_to_emit)
-- Fail means no "}}" or "}}}" was found, so emit any remaining
-- unmatched opening braces before any templates/parameters that
-- were found.
if not success then
self:emit(rep("{", braces))
failed = true
break
-- If there are 3+ opening and closing braces, it's a parameter.
elseif braces >= 3 and self:read(2) == "}" then
self:advance(3)
braces = braces - 3
node = Parameter:new(node)
-- Otherwise, it's a template.
else
self:advance(2)
braces = braces - 2
node = Template:new(node)
end
local index = head + braces
node.index = index
node.raw = sub(text, index, self.head - 1)
node_to_emit = node
-- Terminate once not enough braces remain for further matches.
if braces == 0 then
break
-- Emit any stray opening brace before any matched nodes.
elseif braces == 1 then
self:emit("{")
break
end
end
if node_to_emit then
self:emit(node_to_emit)
end
return braces, failed
end
end
-- Tag.
do
local end_tags
local function get_end_tags()
end_tags, get_end_tags = (data or get_data()).end_tags, nil
return end_tags
end
-- Handlers.
local handle_start
local handle_tag
local function do_tag(self)
local layer = self[-1]
layer.handler, layer.index = handle_start, self.head
self:set_pattern("[%s/>]")
self:advance()
end
local function is_ignored_tag(self, this)
if self.transcluded then
return this == "includeonly"
end
return this == "noinclude" or this == "onlyinclude"
end
local function ignored_tag(self, text, head)
local loc = find(text, ">", head, true)
if not loc then
return self:fail_route()
end
self:jump(loc)
local tag = self:pop()
tag.ignored = true
return tag
end
function handle_start(self, this)
if this == "/" then
local text, head = self.text, self.head + 1
local this = match(text, "^[^%s/>]+", head)
if this and is_ignored_tag(self, lower(this)) then
head = head + #this
if not match(text, "^/[^>]", head) then
return ignored_tag(self, text, head)
end
end
return self:fail_route()
elseif this == "" then
return self:fail_route()
end
-- Tags are only case-insensitive with ASCII characters.
local raw_name = this
this = lower(this)
local end_tag_pattern = (end_tags or get_end_tags())[this]
if not end_tag_pattern then -- Validity check.
return self:fail_route()
end
local layer = self[-1]
local text, head = self.text, self.head + layer.step
if match(text, "^/[^>]", head) then
return self:fail_route()
elseif is_ignored_tag(self, this) then
return ignored_tag(self, text, head)
-- If an onlyinclude tag is not ignored (and cannot be active since it
-- would have triggered special handling earlier), it must be plaintext.
elseif this == "onlyinclude" then
return self:fail_route()
elseif this == "noinclude" or this == "includeonly" then
layer.ignored = true -- Ignored block.
layer.raw_name = raw_name
end
layer.name, layer.handler, layer.end_tag_pattern = this, handle_tag, end_tag_pattern
self:set_pattern(">")
end
function handle_tag(self, this)
if this == "" then
return self:fail_route()
elseif this ~= ">" then
self[-1].attributes = this
return
elseif self:read(-1) == "/" then
self[-1].self_closing = true
return self:pop()
end
local text, head, layer = self.text, self.head + 1, self[-1]
local loc1, loc2 = find(text, layer.end_tag_pattern, head)
if loc1 then
if loc1 > head then
self:emit(sub(text, head, loc1 - 1))
end
self:jump(loc2)
return self:pop()
-- noinclude and includeonly will tolerate having no closing tag, but
-- only if given in lowercase. This is due to a preprocessor bug, as
-- it uses a regex with the /i (case-insensitive) flag to check for
-- end tags, but a simple array lookup with lowercase tag names when
-- looking up which tags should tolerate no closing tag (exact match
-- only, so case-sensitive).
elseif layer.ignored then
local raw_name = layer.raw_name
if raw_name == "noinclude" or raw_name == "includeonly" then
self:jump(#text)
return self:pop()
end
end
return self:fail_route()
end
function Parser:tag()
-- HTML comment.
if self:read(1, 3) == "!--" then
local text = self.text
self:jump(select(2, find(text, "-->", self.head + 4, true)) or #text)
-- onlyinclude tags (which must be lowercase with no whitespace).
elseif self.onlyinclude and self:read(1, 13) == "/onlyinclude>" then
local text = self.text
self:jump(select(2, find(text, "<onlyinclude>", self.head + 14, true)) or #text)
else
local success, tag = self:try(do_tag)
if not success then
self:emit("<")
elseif not tag.ignored then
tag.end_tag_pattern = nil
self:emit(Tag:new(tag))
end
end
end
end
-- Heading.
-- The preparser assigns each heading a number, which is used for things like section edit links. The preparser will only do this for heading blocks which aren't nested inside templates, parameters and parser tags. In some cases (e.g. when template blocks contain untrimmed newlines), a preparsed heading may not be treated as a heading in the final output. That does not affect the preparser, however, which will always count sections based on the preparser heading count, since it can't know what a template's final output will be.
do
-- Handlers.
local handle_start
local handle_body
local handle_possible_end
local function do_heading(self)
local layer, head = self[-1], self.head
layer.handler, layer.index = handle_start, head
self:set_pattern("[\t\n ]")
-- Comments/tags interrupt the equals count.
local eq = match(self.text, "^=+()", head) - head
layer.level = eq
self:advance(eq)
end
local function do_heading_possible_end(self)
local layer = self[-1]
layer.handler = handle_possible_end
self:set_pattern("[\n<]")
end
function handle_start(self, ...)
-- ===== is "=" as an L2; ======== is "==" as an L3 etc.
local function newline(self)
local layer = self[-1]
local eq = layer.level
if eq <= 2 then
return self:fail_route()
end
-- Calculate which equals signs determine the heading level.
local level_eq = eq - (2 - eq % 2)
level_eq = level_eq > 12 and 12 or level_eq
-- Emit the excess.
self:emit(rep("=", eq - level_eq))
layer.level = level_eq / 2
return self:pop()
end
local function whitespace(self)
local success, possible_end = self:try(do_heading_possible_end)
if success then
self:emit(Wikitext:new(possible_end))
local layer = self[-1]
layer.handler = handle_body
self:set_pattern("[\n<=[{]")
return self:consume()
end
return newline(self)
end
handle_start = self:switch(handle_start, {
["\t"] = whitespace,
["\n"] = newline,
[" "] = whitespace,
[""] = newline,
[false] = function(self)
-- Emit any excess = signs once we know it's a conventional heading. Up till now, we couldn't know if the heading is just a string of = signs (e.g. ========), so it wasn't guaranteed that the heading text starts after the 6th.
local layer = self[-1]
local eq = layer.level
if eq > 6 then
self:emit(1, rep("=", eq - 6))
layer.level = 6
end
layer.handler = handle_body
self:set_pattern("[\n<=[{]")
return self:consume()
end
})
return handle_start(self, ...)
end
function handle_body(self, ...)
handle_body = self:switch(handle_body, {
["\n"] = Parser.fail_route,
["<"] = Parser.tag,
["="] = function(self)
-- Comments/tags interrupt the equals count.
local eq = match(self.text, "^=+", self.head)
local eq_len = #eq
self:advance(eq_len)
local success, possible_end = self:try(do_heading_possible_end)
if success then
self:emit(eq)
self:emit(Wikitext:new(possible_end))
return self:consume()
end
local layer = self[-1]
local level = layer.level
if eq_len > level then
self:emit(rep("=", eq_len - level))
elseif level > eq_len then
layer.level = eq_len
self:emit(1, rep("=", level - eq_len))
end
return self:pop()
end,
["["] = Parser.wikilink_block,
["{"] = function(self, this)
return self:braces("{", true)
end,
[""] = Parser.fail_route,
[false] = Parser.emit
})
return handle_body(self, ...)
end
function handle_possible_end(self, ...)
handle_possible_end = self:switch(handle_possible_end, {
["\n"] = Parser.fail_route,
["<"] = function(self)
if self:read(1, 3) ~= "!--" then
return self:pop()
end
local head = select(2, find(self.text, "-->", self.head + 4, true))
if not head then
return self:pop()
end
self:jump(head)
end,
[""] = Parser.fail_route,
[false] = function(self, this)
if not match(this, "^[\t ]+()$") then
return self:pop()
end
self:emit(this)
end
})
return handle_possible_end(self, ...)
end
function Parser:heading()
local success, heading = self:try(do_heading)
if success then
local section = self.section + 1
heading.section = section
self.section = section
self:emit(Heading:new(heading))
return self:consume()
else
self:emit("=")
end
end
end
------------------------------------------------------------------------------------
--
-- Block handlers
--
------------------------------------------------------------------------------------
-- Block handlers.
-- These are blocks which can affect template/parameter parsing, since they're also parsed by Parsoid at the same time (even though they aren't processed until later).
-- All blocks (including templates/parameters) can nest inside each other, but an inner block must be closed before the outer block which contains it. This is why, for example, the wikitext "{{template| [[ }}" will result in an unprocessed template, since the inner "[[" is treated as the opening of a wikilink block, which prevents "}}" from being treated as the closure of the template block. On the other hand, "{{template| [[ ]] }}" will process correctly, since the wikilink block is closed before the template closure. It makes no difference whether the block will be treated as valid or not when it's processed later on, so "{{template| [[ }} ]] }}" would also work, even though "[[ }} ]]" is not a valid wikilink.
-- Note that nesting also affects pipes and equals signs, in addition to block closures.
-- These blocks can be nested to any degree, so "{{template| [[ [[ [[ ]] }}" will not work, since only one of the three wikilink blocks has been closed. On the other hand, "{{template| [[ [[ [[ ]] ]] ]] }}" will work.
-- All blocks are implicitly closed by the end of the text, since their validity is irrelevant at this stage.
-- Language conversion block.
-- Opens with "-{" and closes with "}-". However, templates/parameters take priority, so "-{{" is parsed as "-" followed by the opening of a template/parameter block (depending on what comes after).
-- Note: Language conversion blocks aren't actually enabled on the English Wiktionary, but Parsoid still parses them at this stage, so they can affect the closure of outer blocks: e.g. "[[ -{ ]]" is not a valid wikilink block, since the "]]" falls inside the new language conversion block.
do
--Handler.
local handle_language_conversion_block
local function do_language_conversion_block(self)
local layer = self[-1]
layer.handler = handle_language_conversion_block
self:set_pattern("[\n<[{}]")
end
function handle_language_conversion_block(self, ...)
handle_language_conversion_block = self:switch(handle_language_conversion_block, {
["\n"] = Parser.heading_block,
["<"] = Parser.tag,
["["] = Parser.wikilink_block,
["{"] = Parser.braces,
["}"] = function(self)
if self:read(1) == "-" then
self:emit("}-")
self:advance()
return self:pop()
end
self:emit("}")
end,
[""] = Parser.pop,
[false] = Parser.emit
})
return handle_language_conversion_block(self, ...)
end
function Parser:braces(this, fail_on_unclosed_braces)
local language_conversion_block = self:read(-1) == "-"
if self:read(1) == "{" then
local braces, failed = self:template_or_parameter()
-- Headings will fail if they contain an unclosed brace block.
if failed and fail_on_unclosed_braces then
return self:fail_route()
-- Language conversion blocks cannot begin "-{{", but can begin
-- "-{{{" iff parsed as "-{" + "{{".
elseif not (language_conversion_block and braces == 1) then
return self:consume()
end
else
self:emit(this)
if not language_conversion_block then
return
end
self:advance()
end
self:emit(Wikitext:new(self:get(do_language_conversion_block)))
end
end
--[==[
Headings
Opens with "\n=" (or "=" at the start of the text), and closes with "\n" or the end of the text. Note that it doesn't matter whether the heading will fail to process due to a premature newline (e.g. if there are no closing signs), so at this stage the only thing that matters for closure is the newline or end of text.
Note: Heading blocks are only parsed like this if they occur inside a template, since they do not iterate the preparser's heading count (i.e. they aren't proper headings).
Note 2: if directly inside a template argument with no previous equals signs, a newline followed by a single equals sign is parsed as an argument equals sign, not the opening of a new L1 heading block. This does not apply to any other heading levels. As such, {{template|key\n=}}, {{template|key\n=value}} or even {{template|\n=}} will successfully close, but {{template|key\n==}}, {{template|key=value\n=more value}}, {{template\n=}} etc. will not, since in the latter cases the "}}" would fall inside the new heading block.
]==]
do
--Handler.
local handle_heading_block
local function do_heading_block(self)
local layer = self[-1]
layer.handler = handle_heading_block
self:set_pattern("[\n<[{]")
end
function handle_heading_block(self, ...)
handle_heading_block = self:switch(handle_heading_block, {
["\n"] = function(self)
self:newline()
return self:pop()
end,
["<"] = Parser.tag,
["["] = Parser.wikilink_block,
["{"] = Parser.braces,
[""] = Parser.pop,
[false] = Parser.emit
})
return handle_heading_block(self, ...)
end
function Parser:heading_block(this, nxt)
self:newline()
this = this .. (nxt or "=")
local loc = #this - 1
while self:read(0, loc) == this do
self:advance()
self:emit(Wikitext:new(self:get(do_heading_block)))
end
end
end
-- Wikilink block.
-- Opens with "[[" and closes with "]]".
do
-- Handler.
local handle_wikilink_block
local function do_wikilink_block(self)
local layer = self[-1]
layer.handler = handle_wikilink_block
self:set_pattern("[\n<[%]{]")
end
function handle_wikilink_block(self, ...)
handle_wikilink_block = self:switch(handle_wikilink_block, {
["\n"] = Parser.heading_block,
["<"] = Parser.tag,
["["] = Parser.wikilink_block,
["]"] = function(self)
if self:read(1) == "]" then
self:emit("]]")
self:advance()
return self:pop()
end
self:emit("]")
end,
["{"] = Parser.braces,
[""] = Parser.pop,
[false] = Parser.emit
})
return handle_wikilink_block(self, ...)
end
function Parser:wikilink_block()
if self:read(1) == "[" then
self:emit("[[")
self:advance(2)
self:emit(Wikitext:new(self:get(do_wikilink_block)))
else
self:emit("[")
end
end
end
-- Lines which only contain comments, " " and "\t" are eaten, so long as
-- they're bookended by "\n" (i.e. not the first or last line).
function Parser:newline()
local text, head = self.text, self.head
while true do
repeat
local loc = match(text, "^[\t ]*<!%-%-()", head + 1)
if not loc then
break
end
loc = select(2, find(text, "-->", loc, true))
head = loc or head
until not loc
-- Fail if no comments found.
if head == self.head then
break
end
head = match(text, "^[\t ]*()\n", head + 1)
if not head then
break
end
self:jump(head)
end
self:emit("\n")
end
do
-- Handlers.
local handle_start
local main_handler
-- If `transcluded` is true, then the text is checked for a pair of
-- onlyinclude tags. If these are found (even if they're in the wrong
-- order), then the start of the page is treated as though it is preceded
-- by a closing onlyinclude tag.
-- Note 1: unlike other parser extension tags, onlyinclude tags are case-
-- sensitive and cannot contain whitespace.
-- Note 2: onlyinclude tags *can* be implicitly closed by the end of the
-- text, but the hard requirement above means this can only happen if
-- either the tags are in the wrong order or there are multiple onlyinclude
-- blocks.
local function do_parse(self, transcluded)
local layer = self[-1]
layer.handler = handle_start
self:set_pattern(".")
self.section = 0
if not transcluded then
return
end
self.transcluded = true
local text = self.text
if find(text, "</onlyinclude>", 1, true) then
local head = find(text, "<onlyinclude>", 1, true)
if head then
self.onlyinclude = true
self:jump(head + 13)
end
end
end
-- If the first character is "=", try parsing it as a heading.
function handle_start(self, this)
local layer = self[-1]
layer.handler = main_handler
self:set_pattern("[\n<{]")
if this == "=" then
return self:heading()
end
return self:consume()
end
function main_handler(self, ...)
main_handler = self:switch(main_handler, {
["\n"] = function(self)
self:newline()
if self:read(1) == "=" then
self:advance()
return self:heading()
end
end,
["<"] = Parser.tag,
["{"] = function(self)
if self:read(1) == "{" then
self:template_or_parameter()
return self:consume()
end
self:emit("{")
end,
[""] = Parser.pop,
[false] = Parser.emit
})
return main_handler(self, ...)
end
function export.parse(text, transcluded)
local text_type = type(text)
return (select(2, Parser:parse{
text = text_type == "string" and text or
text_type == "number" and tostring(text) or
error("bad argument #1 (string expected, got " .. text_type .. ")"),
node = {Wikitext, true},
route = {do_parse, transcluded}
}))
end
parse = export.parse
end
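--[==[
Usage sketch for export.parse: the `transcluded` flag applies transclusion
semantics, so (per the note above) a pair of onlyinclude tags restricts parsing
to the region between them.
	local m_template_parser = require("Module:template parser")
	local tree = m_template_parser.parse("a<onlyinclude>{{foo}}</onlyinclude>{{bar}}", true)
	-- Only the "{{foo}}" between the onlyinclude tags ends up in the node tree;
	-- class_else_type and the find_* iterators below can be used to inspect it.
]==]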
do
local function next_template(iter)
while true do
local node = iter()
if node == nil or class_else_type(node) == "template" then
return node
end
end
end
function export.find_templates(text, not_transcluded)
return next_template, parse(text, not not_transcluded):__pairs("next_node")
end
end
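--[==[
Usage sketch: iterating over every template invocation in some wikitext.
get_name() returns the normalized name (resolving redirects where possible) and
get_arguments() returns the parsed arguments, keyed the way Scribunto frame
arguments are keyed.
	local m_template_parser = require("Module:template parser")
	for template in m_template_parser.find_templates("{{foo|bar|lang=en}}") do
		mw.log(template:get_name())           -- e.g. "foo"
		local args = template:get_arguments() -- e.g. { "bar", lang = "en" }
	end
]==]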
do
local link_parameter_1, link_parameter_2
local function get_link_parameter_1()
link_parameter_1, get_link_parameter_1 = (data or get_data()).template_link_param_1, nil
return link_parameter_1
end
local function get_link_parameter_2()
link_parameter_2, get_link_parameter_2 = (data or get_data()).template_link_param_2, nil
return link_parameter_2
end
-- Generate a link. If the target title doesn't have a fragment, use "#top"
-- (which is an implicit anchor at the top of every page), as this ensures
-- self-links still display as links, since bold display is distracting and
-- unintuitive for template links.
local function link_page(title, display)
local fragment = title.fragment
if fragment == "" then
fragment = "top"
end
return format(
"[[:%s|%s]]",
encode_uri(title.prefixedText .. "#" .. fragment, "WIKI"),
display
)
end
-- pf_arg1 or pf_arg2 may need to be linked if a given parser function
-- treats them as a pagename. If a key exists in `namespace`, the value is
-- the namespace for the page: if not 0, then the namespace prefix will
-- always be added to the input (e.g. {{#invoke:}} can only target the
-- Module: namespace, so inputting "Template:foo" gives
-- "Module:Template:foo", and "Module:foo" gives "Module:Module:foo").
-- However, this isn't possible with mainspace (namespace 0), so prefixes
-- are respected. make_title handles all of this automatically.
local function finalize_arg(pagename, namespace)
if namespace == nil then
return pagename
end
local title = make_title(namespace, pagename)
if not (title and is_valid_title(title)) then
return pagename
end
return link_page(title, pagename)
end
local function render_title(name, args)
-- parse_template_name returns a table of transclusion modifiers plus
-- the normalized template/magic word name, which will be used as link
-- targets. The third return value pf_arg1 is the first argument of a
-- a parser function, which comes after the colon (e.g. "foo" in
-- "{{#IF:foo|bar|baz}}"). This means args[1] (i.e. the first argument
-- that comes after a pipe is actually argument 2, and so on. Note: the
-- second parameter of parse_template_name checks if there are any
-- arguments, since parser variables cannot take arguments (e.g.
-- {{CURRENTYEAR}} is a parser variable, but {{CURRENTYEAR|foo}}
-- transcludes "Template:CURRENTYEAR"). In such cases, the returned
-- table explicitly includes the "Template:" prefix in the template
-- name. The third parameter instructs it to retain any fragment in the
-- template name in the returned table, if present.
local chunks, subclass, pf_arg1 = parse_template_name(
name,
args and pairs(args)(args) ~= nil,
true
)
if chunks == nil then
return name
end
local chunks_len = #chunks
-- Additionally, generate the corresponding table `rawchunks`, which
-- is a list of colon-separated chunks in the raw input. This is used
-- to retrieve the display forms for each chunk.
local rawchunks = split(name, ":")
for i = 1, chunks_len - 1 do
chunks[i] = format(
"[[%s|%s]]",
encode_uri((magic_words or get_magic_words())[sub(chunks[i], 1, -2)].transclusion_modifier, "WIKI"),
rawchunks[i]
)
end
local chunk = chunks[chunks_len]
-- If it's a template, return a link to it with link_page, concatenating
-- the remaining chunks in `rawchunks` to form the display text.
-- Use new_title with the default namespace 10 (Template:) to generate
-- a target title, which is the same setting used for retrieving
-- templates (including those in other namespaces, as prefixes override
-- the default).
if subclass == "template" then
chunks[chunks_len] = link_page(
new_title(chunk, 10),
concat(rawchunks, ":", chunks_len) -- :
)
return concat(chunks, ":") -- :
elseif subclass == "parser variable" then
chunks[chunks_len] = format(
"[[%s|%s]]",
encode_uri((magic_words or get_magic_words())[chunk].parser_variable, "WIKI"),
rawchunks[chunks_len]
)
return concat(chunks, ":") -- :
end
-- Otherwise, it must be a parser function.
local mgw_data = (magic_words or get_magic_words())[sub(chunk, 1, -2)]
local link = mgw_data.parser_function or mgw_data.transclusion_modifier
local pf_arg2 = args and args[1] or nil
-- Some magic words have different links, depending on whether argument
-- 2 is specified (e.g. "baz" in {{foo:bar|baz}}).
if type(link) == "table" then
link = pf_arg2 and link[2] or link[1]
end
chunks[chunks_len] = format(
"[[%s|%s]]",
encode_uri(link, "WIKI"),
rawchunks[chunks_len]
)
-- #TAG: has special handling, because documentation links for parser
-- extension tags come from [[Module:data/parser extension tags]].
if chunk == "#TAG:" then
-- Tags are only case-insensitive with ASCII characters.
local tag = (parser_extension_tags or get_parser_extension_tags())[lower(php_trim(pf_arg1))]
if tag then
pf_arg1 = format(
"[[%s|%s]]",
encode_uri(tag, "WIKI"),
pf_arg1
)
end
-- Otherwise, finalize pf_arg1 and add it to `chunks`.
else
pf_arg1 = finalize_arg(pf_arg1, (link_parameter_1 or get_link_parameter_1())[chunk])
end
chunks[chunks_len + 1] = pf_arg1
-- Finalize pf_arg2 (if applicable), then return.
if pf_arg2 then
args[1] = finalize_arg(pf_arg2, (link_parameter_2 or get_link_parameter_2())[chunk])
end
return concat(chunks, ":") -- :
end
function export.buildTemplate(title, args)
local output = {title}
-- Iterate over all numbered parameters in order, followed by any
-- remaining parameters in codepoint order. Implicit parameters are
-- used wherever possible, even if explicit numbers are interpolated
-- between them (e.g. 0 would go before any implicit parameters, and
-- 2.5 between 2 and 3).
-- TODO: handle "=" and "|" in params/values.
if args then
local iter, implicit = sorted_pairs(args), table_len(args)
local k, v = iter()
while k ~= nil do
if type(k) == "number" and k >= 1 and k <= implicit and k % 1 == 0 then
insert(output, v)
else
insert(output, k .. "=" .. v)
end
k, v = iter()
end
end
return output
end
build_template = export.buildTemplate
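--[==[
Sketch of buildTemplate: it returns the parts of an invocation as a list, with
positional arguments first (in order) and the remainder as "key=value" strings;
templateLink then joins such a list with "|" inside {{ }} for display.
	local m_template_parser = require("Module:template parser")
	local parts = m_template_parser.buildTemplate("foo", {"a", "b", lang = "en"})
	-- parts is { "foo", "a", "b", "lang=en" }
]==]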
function export.templateLink(title, args, no_link)
local output = build_template(no_link and title or render_title(title, args), args)
for i = 1, #output do
output[i] = encode_entities(output[i], "={}", true, true)
end
return tostring(html_create("code")
:css("white-space", "pre-wrap")
:wikitext("{{" .. concat(output, "|") .. "}}") -- {{ | }}
)
end
end
do
local function next_parameter(iter)
while true do
local node = iter()
if node == nil or class_else_type(node) == "parameter" then
return node
end
end
end
function export.find_parameters(text, not_transcluded)
return next_parameter, parse(text, not not_transcluded):__pairs("next_node")
end
function export.displayParameter(name, default)
return tostring(html_create("code")
:css("white-space", "pre-wrap")
:wikitext("{{{" .. concat({name, default}, "|") .. "}}}") -- {{{ | }}}
)
end
end
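--[==[
Usage sketch for the parameter helpers above:
	local m_template_parser = require("Module:template parser")
	for param in m_template_parser.find_parameters("text {{{1|default}}} more") do
		mw.log(param:get_name())    -- the key "1" should come back as the number 1
		mw.log(param:get_default()) -- "default"
	end
	-- displayParameter("lang", "en") wraps "{{{lang|en}}}" in a <code> tag.
]==]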
do
local function check_level(level)
if type(level) ~= "number" then
error("Heading levels must be numbers.")
elseif level < 1 or level > 6 or level % 1 ~= 0 then
error("Heading levels must be integers between 1 and 6.")
end
return level
end
local function next_heading(iter)
while true do
local node = iter()
if node == nil then
return nil
elseif class_else_type(node) == "heading" then
local level = node.level
if level >= iter.i and level <= iter.j then
return node
end
end
end
end
-- FIXME: should headings which contain "\n" be returned? This may depend
-- on variable factors, like template expansion. They iterate the heading
-- count number, but fail on rendering. However, in some cases a different
-- heading might still be rendered due to intermediate equals signs; it
-- may even be of a different heading level: e.g., this is parsed as an
-- L2 heading with a newline (due to the wikilink block), but renders as the
-- L1 heading "=foo[[". Section edit links are sometimes (but not always)
-- present in such cases.
-- ==[[=
-- ]]==
-- TODO: section numbers for edit links seem to also include headings
-- nested inside templates and parameters (but apparently not those in
-- parser extension tags - need to test this more). If we ever want to add
-- section edit links manually, this will need to be accounted for.
function export.find_headings(text, i, j)
local iter = parse(text):__pairs("next_node")
iter.i, iter.j = i and check_level(i) or 1, j and check_level(j) or 6
return next_heading, iter
end
end
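--[==[
Usage sketch for find_headings: the optional second and third arguments
restrict the heading levels returned (they default to 1 and 6).
	local m_template_parser = require("Module:template parser")
	local text = "==Noun==\nsome text\n===Synonyms===\nmore text"
	for heading in m_template_parser.find_headings(text, 2, 3) do
		mw.log(heading.level, heading:get_name(), heading:get_anchor())
	end
	-- e.g. logs (2, "Noun", "Noun") then (3, "Synonyms", "Synonyms")
]==]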
do
local function make_tag(tag)
return tostring(html_create("code")
:css("white-space", "pre-wrap")
:wikitext("<" .. tag .. ">")
)
end
-- Note: invalid tags are returned without links.
function export.wikitagLink(tag)
-- ">" can't appear in tags (including attributes) since the parser
-- unconditionally treats ">" as the end of a tag.
if find(tag, ">", 1, true) then
return make_tag(tag)
end
-- Tags must start "<tagname..." or "</tagname...", with no whitespace
-- after "<" or "</".
local slash, tagname, remainder = match(tag, "^(/?)([^/%s]+)(.*)$")
if not tagname then
return make_tag(tag)
end
-- Tags are only case-insensitive with ASCII characters.
local link = lower(tagname)
if (
-- onlyinclude tags must be lowercase and are whitespace intolerant.
link == "onlyinclude" and (link ~= tagname or remainder ~= "") or
-- Closing wikitags (except onlyinclude) can only have whitespace
-- after the tag name.
slash == "/" and not match(remainder, "^%s*()$") or
-- Tagnames cannot be followed immediately by "/", unless it comes
-- at the end (e.g. "<nowiki/>", but not "<nowiki/ >").
remainder ~= "/" and sub(remainder, 1, 1) == "/"
) then
-- Output with no link.
return make_tag(tag)
end
-- Partial transclusion tags aren't in the table of parser extension
-- tags.
if link == "noinclude" or link == "includeonly" or link == "onlyinclude" then
link = "mw:Transclusion#Partial transclusion"
else
link = (parser_extension_tags or get_parser_extension_tags())[link]
end
if link then
tag = gsub(tag, pattern_escape(tagname), "[[" .. replacement_escape(encode_uri(link, "WIKI")) .. "|%0]]", 1)
end
return make_tag(tag)
end
end
-- For convenience.
export.class_else_type = class_else_type
return export
im5kt08n2nmkvg31rbmxdtxlhku8ghm
193447
193446
2024-11-21T10:31:49Z
Lee
19
[[:en:Module:template_parser]] වෙතින් එක් සංශෝධනයක්
193446
Scribunto
text/plain
--[[
NOTE: This module works by using recursive backtracking to build a node tree, which can then be traversed as necessary.
Because it is called by a number of high-use modules, it has been optimised for speed using a profiler, since it is used to scrape data from large numbers of pages very quickly. To that end, it rolls some of its own methods in cases where this is faster than using a function from one of the standard libraries. Please DO NOT "simplify" the code by removing these, since you are almost guaranteed to slow things down, which could seriously impact performance on pages which call this module hundreds or thousands of times.
It has also been designed to emulate the native parser's behaviour as much as possible, which in some cases means replicating bugs or unintuitive behaviours in that code; these should not be "fixed", since it is important that the outputs are the same. Most of these originate from deficient regular expressions, which can't be used here, so the bugs have to be manually reintroduced as special cases (e.g. onlyinclude tags being case-sensitive and whitespace intolerant, unlike all other tags). If any of these are fixed, this module should also be updated accordingly.
]]
local export = {}
local data_module = "Module:template parser/data"
local magic_words_data_module = "Module:data/magic words"
local pages_module = "Module:pages"
local parser_extension_tags_data_module = "Module:data/parser extension tags"
local parser_module = "Module:parser"
local string_utilities_module = "Module:string utilities"
local table_module = "Module:table"
local require = require
local m_parser = require(parser_module)
local mw = mw
local mw_title = mw.title
local mw_uri = mw.uri
local string = string
local table = table
local anchor_encode = mw_uri.anchorEncode
local build_template -- defined as export.buildTemplate below
local class_else_type = m_parser.class_else_type
local concat = table.concat
local encode_uri = mw_uri.encode
local find = string.find
local format = string.format
local gsub = string.gsub
local html_create = mw.html.create
local insert = table.insert
local is_node = m_parser.is_node
local load_data = mw.loadData
local lower = string.lower
local make_title = mw_title.makeTitle -- unconditionally adds the specified namespace prefix
local match = string.match
local new_title = mw_title.new -- specified namespace prefix is only added if the input doesn't contain one
local next = next
local pairs = pairs
local parse -- defined as export.parse below
local parse_template_name -- defined below
local pcall = pcall
local rep = string.rep
local reverse = string.reverse
local select = select
local sub = string.sub
local title_equals = mw_title.equals
local tostring = m_parser.tostring
local type = type
local umatch = mw.ustring.match
--[==[
Loaders for functions in other modules, which overwrite themselves with the target function when called. This ensures modules are only loaded when needed, retains the speed/convenience of locally-declared pre-loaded functions, and has no overhead after the first call, since the target functions are called directly in any subsequent calls.]==]
local function decode_entities(...)
decode_entities = require(string_utilities_module).decode_entities
return decode_entities(...)
end
local function encode_entities(...)
encode_entities = require(string_utilities_module).encode_entities
return encode_entities(...)
end
local function is_valid_title(...)
is_valid_title = require(pages_module).is_valid_title
return is_valid_title(...)
end
local function pattern_escape(...)
pattern_escape = require(string_utilities_module).pattern_escape
return pattern_escape(...)
end
local function php_trim(...)
php_trim = require(string_utilities_module).php_trim
return php_trim(...)
end
local function replacement_escape(...)
replacement_escape = require(string_utilities_module).replacement_escape
return replacement_escape(...)
end
local function scribunto_param_key(...)
scribunto_param_key = require(string_utilities_module).scribunto_param_key
return scribunto_param_key(...)
end
local function sorted_pairs(...)
sorted_pairs = require(table_module).sortedPairs
return sorted_pairs(...)
end
local function split(...)
split = require(string_utilities_module).split
return split(...)
end
local function table_len(...)
table_len = require(table_module).length
return table_len(...)
end
local function uupper(...)
uupper = require(string_utilities_module).upper
return uupper(...)
end
--[==[
Loaders for objects, which load data (or some other object) into some variable, which can then be accessed as "foo or get_foo()", where the function get_foo sets the object to "foo" and then returns it. This ensures they are only loaded when needed, and avoids the need to check for the existence of the object each time, since once "foo" has been set, "get_foo" will not be called again.]==]
local data
local function get_data()
data, get_data = load_data(data_module), nil
return data
end
local frame
local function get_frame()
frame, get_frame = mw.getCurrentFrame(), nil
return frame
end
local magic_words
local function get_magic_words()
magic_words, get_magic_words = load_data(magic_words_data_module), nil
return magic_words
end
local parser_extension_tags
local function get_parser_extension_tags()
parser_extension_tags, get_parser_extension_tags = load_data(parser_extension_tags_data_module), nil
return parser_extension_tags
end
local Parser, Node = m_parser.new()
------------------------------------------------------------------------------------
--
-- Nodes
--
------------------------------------------------------------------------------------
Node.keys_to_remove = {"handler", "head", "pattern", "route", "step"}
local function expand(obj, frame_args)
return is_node(obj) and obj:expand(frame_args) or obj
end
export.expand = expand
function Node:expand(frame_args)
local output = {}
for i = 1, #self do
output[i] = expand(self[i], frame_args)
end
return concat(output)
end
local Wikitext = Node:new_class("wikitext")
-- force_node ensures the output will always be a node.
function Wikitext:new(this, force_node)
if type(this) ~= "table" then
return force_node and Node.new(self, {this}) or this
elseif #this == 1 then
local this1 = this[1]
return force_node and not is_node(this1) and Node.new(self, this) or this1
end
local success, str = pcall(concat, this)
if success then
return force_node and Node.new(self, {str}) or str
end
return Node.new(self, this)
end
-- First value is the parameter name.
-- Second value is the parameter's default value.
-- Any additional values are ignored: e.g. "{{{a|b|c}}}" is parameter "a" with default value "b" (*not* "b|c").
local Parameter = Node:new_class("parameter")
function Parameter:new(this)
local this2 = this[2]
if class_else_type(this2) == "argument" then
insert(this2, 2, "=")
this2 = Wikitext:new(this2)
end
return Node.new(self, {this[1], this2})
end
function Parameter:__tostring()
local output = {}
for i = 1, #self do
output[i] = tostring(self[i])
end
return "{{{" .. concat(output, "|") .. "}}}"
end
function Parameter:next(i)
i = i + 1
if i <= 2 then
return self[i], self, i
end
end
function Parameter:get_name(frame_args)
return scribunto_param_key(expand(self[1], frame_args))
end
function Parameter:get_default(frame_args)
local default = self[2]
if default ~= nil then
return expand(default, frame_args)
end
return "{{{" .. expand(self[1], frame_args) .. "}}}"
end
function Parameter:expand(frame_args)
if frame_args == nil then
return self:get_default()
end
local name = expand(self[1], frame_args)
local val = frame_args[scribunto_param_key(name)] -- Parameter in use.
if val ~= nil then
return val
end
val = self[2] -- Default.
if val ~= nil then
return expand(val, frame_args)
end
return "{{{" .. name .. "}}}"
end
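--[==[
Illustration only, not part of the original module: a minimal sketch of how a parameter
node behaves when expanded, assuming the tree has been built with `export.parse` (defined
further down). The argument tables here are just example frame arguments.
	Parsing "{{{1|fallback}}}" and expanding the resulting tree:
		:expand{[1] = "given"}  -- should yield "given"    (the frame argument wins)
		:expand{}               -- should yield "fallback" (the default after the first pipe)
	Parsing "{{{1}}}" (no default) and expanding with empty frame arguments:
		:expand{}               -- should yield "{{{1}}}"  (left unexpanded)
]==]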
local Argument = Node:new_class("argument")
function Argument:__tostring()
return tostring(self[1]) .. "=" .. tostring(self[2])
end
function Argument:expand(frame_args)
return expand(self[1], frame_args) .. "=" .. expand(self[2], frame_args)
end
local Template = Node:new_class("template")
function Template:__tostring()
local output = {}
for i = 1, #self do
output[i] = tostring(self[i])
end
return "{{" .. concat(output, "|") .. "}}"
end
-- Normalize the template name, check it's a valid template, then memoize results (using false for invalid titles).
-- Parser functions (e.g. {{#IF:a|b|c}}) need to have the first argument extracted from the title, as it comes after the colon. Because of this, the parser function and first argument are memoized as a table.
-- FIXME: Some parser functions have special argument handling (e.g. {{#SWITCH:}}).
do
local page_title = mw_title.getCurrentTitle()
local namespace_has_subpages = mw.site.namespaces[page_title.namespace].hasSubpages
local raw_pagename = page_title.fullText
local templates = {}
local parser_variables = {}
local parser_functions = {}
local function retrieve_magic_word_data(chunk)
local mgw_data = (magic_words or get_magic_words())[chunk]
if mgw_data then
return mgw_data
end
local normalized = uupper(chunk)
mgw_data = magic_words[normalized]
if mgw_data and not mgw_data.case_sensitive then
return mgw_data
end
end
-- Returns the name required to transclude the title object `title` using
-- template {{ }} syntax.
local function get_template_invocation_name(title)
if not is_valid_title(title) then
error("Template invocations require a valid page title, which cannot contain an interwiki prefix.")
end
local namespace = title.namespace
-- If not in the template namespace, include the prefix (or ":" if
-- mainspace).
if namespace ~= 10 then
return namespace == 0 and ":" .. title.text or title.prefixedText
end
-- If in the template namespace and it shares a name with a magic word,
-- it needs the prefix "Template:".
local text = title.text
local colon = find(text, ":", 1, true)
if not colon then
local mgw_data = retrieve_magic_word_data(text)
return mgw_data and mgw_data.parser_variable and title.prefixedText or text
end
local mgw_data = retrieve_magic_word_data(sub(text, 1, colon - 1))
if mgw_data and (mgw_data.parser_function or mgw_data.transclusion_modifier) then
return title.prefixedText
end
-- Also if "Template:" is necessary for disambiguation (e.g.
-- "Template:Category:Foo" can't be abbreviated to "Category:Foo").
local check = new_title(text, 10)
return check and title_equals(title, check) and text or title.prefixedText
end
export.getTemplateInvocationName = get_template_invocation_name
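--[==[
Illustration only, not part of the original module: expected behaviour of
get_template_invocation_name for a few title objects, following the rules above.
	get_template_invocation_name(mw.title.new("Template:foo"))          -- "foo"
	get_template_invocation_name(mw.title.new("foo", 0))                -- ":foo" (mainspace needs the leading colon)
	get_template_invocation_name(mw.title.new("Wiktionary:Sandbox"))    -- "Wiktionary:Sandbox"
	get_template_invocation_name(mw.title.new("Template:Category:Foo")) -- "Template:Category:Foo"
		-- ("Category:Foo" alone would resolve to the category namespace instead)
]==]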
-- Returns whether a title is a redirect or not. Structured like this to
-- allow the use of pcall, since it will throw an error if the expensive
-- parser function limit has been reached.
local function is_redirect(title)
return title.isRedirect
end
function parse_template_name(name, has_args, fragment, force_transclusion)
local chunks, colon, start, n, p = {}, find(name, ":", 1, true), 1, 0, 0
while colon do
-- Pattern applies PHP ltrim.
local mgw_data = retrieve_magic_word_data(match(sub(name, start, colon - 1), "[^%z\t-\v\r ].*") or "")
if not mgw_data then
break
end
local priority = mgw_data.priority
if not (priority and priority > p) then
local pf = mgw_data.parser_function and mgw_data.name or nil
if pf then
n = n + 1
chunks[n] = pf .. ":"
return chunks, "parser function", sub(name, colon + 1)
end
break
end
n = n + 1
chunks[n] = mgw_data.name .. ":"
start, p = colon + 1, priority
colon = find(name, ":", start, true)
end
if start > 1 then
name = sub(name, start)
end
name = php_trim(name)
-- Parser variables can only take SUBST:/SAFESUBST: as modifiers.
if not has_args and p <= 1 then
local mgw_data = retrieve_magic_word_data(name)
local pv = mgw_data and mgw_data.parser_variable and mgw_data.name or nil
if pv then
n = n + 1
chunks[n] = pv
return chunks, "parser variable"
end
end
-- Handle relative template names.
if namespace_has_subpages then
-- If the name starts with "/", it's treated as a subpage of the
-- current page. Final slashes are trimmed, but this can't affect
-- the intervening slash (e.g. {{///}} refers to "{{PAGENAME}}/").
local initial = sub(name, 1, 1)
if initial == "/" then
name = raw_pagename .. (match(name, "^/.*[^/]") or "/")
-- If it starts with "../", trim it and any that follow, and go up
-- that many subpage levels. Then, treat any additional text as
-- a subpage of that page; final slashes are trimmed.
elseif initial == "." and sub(name, 2, 3) == "./" then
local n = 4
while sub(name, n, n + 2) == "../" do
n = n + 3
end
-- Retain an initial "/".
name = sub(name, n - 1)
-- Trim the relevant number of subpages from the pagename.
local pagename, i = reverse(raw_pagename), 0
for _ = 1, (n - 1) / 3 do
i = find(pagename, "/", i + 1, true)
-- Fail if there aren't enough slashes.
if not i then
return nil
end
end
-- Add the subpage text; since the intervening "/" is retained
-- in `name`, it can be trimmed along with any other final
-- slashes (e.g. {{..///}} refers to "{{BASEPAGENAME}}".)
name = reverse(sub(pagename, i + 1)) .. (match(name, "^.*[^/]") or "")
end
end
local title = new_title(name, 10)
if not is_valid_title(title) then
return nil
end
-- If `fragment` is set, save the original title's fragment, since it
-- won't carry through to any redirect targets.
if fragment then
fragment = title.fragment
end
-- Resolve any redirects. Note that is_valid_title treats interwiki
-- titles as invalid, which is correct in this case: if the redirect
-- target is an interwiki link, the template won't fail, but the
-- redirect does not get resolved (i.e. the redirect page itself gets
-- transcluded, so the template name should not be normalized to the
-- target). It also treats titles that only have fragments as invalid
-- (e.g. "#foo"), but these can't be used as redirects anyway.
-- title.redirectTarget increments the expensive parser function count,
-- but avoids extraneous transclusions polluting template lists and the
-- performance hit caused by indiscriminately grabbing redirectTarget.
-- However, if the expensive parser function limit has already been hit,
-- redirectTarget is used as a fallback. force_transclusion forces the
-- use of the fallback.
local redirect = true
if not force_transclusion then
local success, resolved = pcall(is_redirect, title)
if success and not resolved then
redirect = false
end
end
if redirect then
redirect = title.redirectTarget
if is_valid_title(redirect) then
title = redirect
end
end
local chunk = get_template_invocation_name(title)
-- Set the fragment (if applicable).
if fragment then
chunk = chunk .. "#" .. fragment
end
chunks[n + 1] = chunk
return chunks, "template"
end
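--[==[
Illustration only, not part of the original module: roughly what parse_template_name is
expected to return for a few inputs, based on the handling above. The canonical casing of
magic word names comes from [[Module:data/magic words]], and template results depend on
which pages and redirects exist locally, so treat these as sketches rather than fixed
outputs.
	parse_template_name("#if:x")        -- { "#IF:" }, "parser function", "x"
	parse_template_name("CURRENTYEAR")  -- { "CURRENTYEAR" }, "parser variable"
	parse_template_name("subst:foo")    -- { "SUBST:", "foo" }, "template"
	parse_template_name("/doc")         -- the current page's "/doc" subpage, as a template
	                                    -- (when the current namespace has subpages)
]==]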
-- Note: force_transclusion avoids incrementing the expensive parser
-- function count by forcing transclusion instead. This should only be used
-- when there is a real risk that the expensive parser function limit of
-- 500 will be hit.
local function process_name(self, frame_args, force_transclusion)
local name = expand(self[1], frame_args)
local has_args, norm = #self > 1
if not has_args then
norm = parser_variables[name]
if norm then
return norm, "parser variable"
end
end
norm = templates[name]
if norm then
local pf_arg1 = parser_functions[name]
return norm, pf_arg1 and "parser function" or "template", pf_arg1
elseif norm == false then
return nil
end
local chunks, subclass, pf_arg1 = parse_template_name(name, has_args, nil, force_transclusion)
-- Fail if invalid.
if not chunks then
templates[name] = false
return nil
end
local chunk1 = chunks[1]
-- Fail on SUBST:.
if chunk1 == "SUBST:" then
templates[name] = false
return nil
-- Any modifiers are ignored.
elseif subclass == "parser function" then
local pf = chunks[#chunks]
templates[name] = pf
parser_functions[name] = pf_arg1
return pf, "parser function", pf_arg1
end
-- Ignore SAFESUBST:, and treat MSGNW: as a parser function with the pagename as its first argument (ignoring any RAW: that comes after).
if chunks[chunk1 == "SAFESUBST:" and 2 or 1] == "MSGNW:" then
pf_arg1 = chunks[#chunks]
local pf = "MSGNW:"
templates[name] = pf
parser_functions[name] = pf_arg1
return pf, "parser function", pf_arg1
end
-- Ignore any remaining modifiers, as they've done their job.
local output = chunks[#chunks]
if subclass == "parser variable" then
parser_variables[name] = output
else
templates[name] = output
end
return output, subclass
end
function Template:get_name(frame_args, force_transclusion)
-- Only return the first return value.
return (process_name(self, frame_args, force_transclusion))
end
function Template:get_arguments(frame_args)
local name, subclass, pf_arg1 = process_name(self, frame_args)
if name == nil then
return nil
elseif subclass == "parser variable" then
return {}
end
local template_args = {}
if subclass == "parser function" then
template_args[1] = pf_arg1
for i = 2, #self do
template_args[i] = expand(self[i], frame_args) -- Not trimmed.
end
return template_args
end
local implicit = 0
for i = 2, #self do
local arg = self[i]
if class_else_type(arg) == "argument" then
template_args[scribunto_param_key(expand(arg[1], frame_args))] = php_trim(expand(arg[2], frame_args))
else
implicit = implicit + 1
template_args[implicit] = expand(arg, frame_args) -- Not trimmed.
end
end
return template_args
end
end
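--[==[
Illustration only, not part of the original module: a rough sketch of the template node
accessors above, using find_templates (defined further down) on arbitrary example wikitext.
	for template in export.find_templates("{{foo|bar|key = value}}") do
		-- template:get_name()      -- should normalize to "foo" (redirects resolved)
		-- template:get_arguments() -- should give { "bar", key = "value" }:
		--                          -- implicit values are untrimmed, named values are trimmed
	end
]==]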
-- BIG TODO: manual template expansion.
function Template:expand()
return (frame or get_frame()):preprocess(tostring(self))
end
local Tag = Node:new_class("tag")
do
local php_htmlspecialchars_data
local function get_php_htmlspecialchars_data()
php_htmlspecialchars_data, get_php_htmlspecialchars_data = (data or get_data()).php_htmlspecialchars, nil
return php_htmlspecialchars_data
end
local function php_htmlspecialchars(str, compat)
return (gsub(str, compat and "[&\"<>]" or "[&\"'<>]", php_htmlspecialchars_data or get_php_htmlspecialchars_data()))
end
function Tag:__tostring()
local open_tag, attributes, n = {"<", self.name}, self:get_attributes(), 2
for attr, value in next, attributes do
n = n + 1
open_tag[n] = " " .. php_htmlspecialchars(attr) .. "=\"" .. php_htmlspecialchars(value, true) .. "\""
end
if self.self_closing then
return concat(open_tag) .. "/>"
end
return concat(open_tag) .. ">" .. concat(self) .. "</" .. self.name .. ">"
end
local valid_attribute_name
local function get_valid_attribute_name()
valid_attribute_name, get_valid_attribute_name = (data or get_data()).valid_attribute_name, nil
return valid_attribute_name
end
function Tag:get_attributes()
local raw = self.attributes
if not raw then
self.attributes = {}
return self.attributes
elseif type(raw) == "table" then
return raw
end
if sub(raw, -1) == "/" then
raw = sub(raw, 1, -2)
end
local attributes, head = {}, 1
-- Semi-manual implementation of the native regex.
while true do
local name, loc = match(raw, "([^\t\n\f\r />][^\t\n\f\r /=>]*)()", head)
if not name then
break
end
head = loc
local value
loc = match(raw, "^[\t\n\f\r ]*=[\t\n\f\r ]*()", head)
if loc then
head = loc
-- The value is either enclosed in "" or '', or it runs until a space or
-- the end of the string. Missing end quotes are repaired by closing the
-- value at the end.
value, loc = match(raw, "^\"([^\"]*)\"?()", head)
if not value then
value, loc = match(raw, "^'([^']*)'?()", head)
if not value then
value, loc = match(raw, "^([^\t\n\f\r ]*)()", head)
end
end
head = loc
end
-- valid_attribute_name is a pattern matching a valid attribute name.
-- Defined in the data due to its length - see there for more info.
if umatch(name, valid_attribute_name or get_valid_attribute_name()) then
-- Sanitizer applies PHP strtolower (ASCII-only).
attributes[lower(name)] = value and decode_entities(
php_trim((gsub(value, "[\t\n\r ]+", " ")))
) or ""
end
end
self.attributes = attributes
return attributes
end
end
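--[==[
Illustration only, not part of the original module: roughly what the attribute parser
above produces for a tag written as <ref name="note 1" group=lower-alpha>:
	get_attributes() -- should give { name = "note 1", group = "lower-alpha" }
Attribute names are lowercased (ASCII only); values are whitespace-normalized and
entity-decoded, and a missing end quote is repaired by closing the value at the end of
the string.
]==]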
function Tag:expand()
return (frame or get_frame()):preprocess(tostring(self))
end
local Heading = Node:new_class("heading")
function Heading:new(this)
if #this > 1 then
local success, str = pcall(concat, this)
if success then
return Node.new(self, {
str,
level = this.level,
section = this.section,
index = this.index
})
end
end
return Node.new(self, this)
end
function Heading:__tostring()
local eq = rep("=", self.level)
return eq .. Node.__tostring(self) .. eq
end
do
local expand_node = Node.expand
-- Expanded heading names can contain "\n" (e.g. inside nowiki tags), which
-- causes any heading containing them to fail. However, in such cases, the
-- native parser still treats it as a heading for the purpose of section
-- numbers.
local function validate_name(self, frame_args)
local name = expand_node(self, frame_args)
if find(name, "\n", 1, true) then
return nil
end
return name
end
function Heading:get_name(frame_args)
local name = validate_name(self, frame_args)
return name ~= nil and php_trim(name) or nil
end
-- FIXME: account for anchor disambiguation.
function Heading:get_anchor(frame_args)
local name = validate_name(self, frame_args)
return name ~= nil and decode_entities(anchor_encode(name)) or nil
end
function Heading:expand(frame_args)
local eq = rep("=", self.level)
return eq .. expand_node(self, frame_args) .. eq
end
end
------------------------------------------------------------------------------------
--
-- Parser
--
------------------------------------------------------------------------------------
function Parser:read(i, j)
local head, i = self.head, i or 0
return sub(self.text, head + i, head + (j or i))
end
function Parser:advance(n)
self.head = self.head + (n or self[-1].step or 1)
end
function Parser:jump(head)
self.head = head
self[-1].nxt = nil
end
function Parser:set_pattern(pattern)
local layer = self[-1]
layer.pattern = pattern
layer.nxt = nil
end
function Parser:consume()
local layer = self[-1]
local this = layer.nxt
if this then
layer.nxt = nil
else
local text, head = self.text, self.head
local loc1, loc2 = find(text, layer.pattern, head)
if loc1 == head or not loc1 then
this = sub(text, head, loc2)
else
this = sub(text, head, loc1 - 1)
layer.nxt = sub(text, loc1, loc2)
end
end
layer.step = #this
return layer.handler(self, this)
end
-- Template or parameter.
-- Parsed by matching the opening braces innermost-to-outermost (ignoring lone closing braces). Parameters {{{ }}} take priority over templates {{ }} where possible, but a double closing brace will always result in a closure, even if there are 3+ opening braces.
-- For example, "{{{{foo}}}}" (4) is parsed as a parameter enclosed by single braces, and "{{{{{foo}}}}}" (5) is a parameter inside a template. However, "{{{{{foo }} }}}" is a template inside a parameter, due to "}}" forcing the closure of the inner node.
do
-- Handlers.
local handle_name
local handle_argument
local function do_template_or_parameter(self, inner_node)
self:push_sublayer(handle_name)
self:set_pattern("[\n<[{|}]")
-- If a node has already been parsed, nest it at the start of the new
-- outer node (e.g. when parsing "{{{{foo}}bar}}", the template "{{foo}}"
-- is parsed first, since it's the innermost, and becomes the first
-- node of the outer template).
if inner_node then
self:emit(inner_node)
end
end
function handle_name(self, ...)
handle_name = self:switch(handle_name, {
["\n"] = Parser.heading_block,
["<"] = Parser.tag,
["["] = Parser.wikilink_block,
["{"] = Parser.braces,
["|"] = function(self)
self:emit(Wikitext:new(self:pop_sublayer()))
self:push_sublayer(handle_argument)
self:set_pattern("[\n<=[{|}]")
end,
["}"] = function(self)
if self:read(1) == "}" then
self:emit(Wikitext:new(self:pop_sublayer()))
return self:pop()
end
self:emit("}")
end,
[""] = Parser.fail_route,
[false] = Parser.emit
})
return handle_name(self, ...)
end
function handle_argument(self, ...)
local function emit_argument(self)
local arg = Wikitext:new(self:pop_sublayer())
local layer = self[-1]
local key = layer.key
if key then
arg = Argument:new{key, arg}
layer.key = nil
end
self:emit(arg)
end
handle_argument = self:switch(handle_argument, {
["\n"] = function(self)
return self:heading_block("\n", self[-1].key and "=" or "==")
end,
["<"] = Parser.tag,
["="] = function(self)
local key = Wikitext:new(self:pop_sublayer())
self[-1].key = key
self:push_sublayer(handle_argument)
self:set_pattern("[\n<[{|}]")
end,
["["] = Parser.wikilink_block,
["{"] = Parser.braces,
["|"] = function(self)
emit_argument(self)
self:push_sublayer(handle_argument)
self:set_pattern("[\n<=[{|}]")
end,
["}"] = function(self)
if self:read(1) == "}" then
emit_argument(self)
return self:pop()
end
self:emit("}")
end,
[""] = Parser.fail_route,
[false] = Parser.emit
})
return handle_argument(self, ...)
end
function Parser:template_or_parameter()
local text, head, node_to_emit, failed = self.text, self.head
-- Comments/tags interrupt the brace count.
local braces = match(text, "^{+()", head) - head
self:advance(braces)
while true do
local success, node = self:try(do_template_or_parameter, node_to_emit)
-- Fail means no "}}" or "}}}" was found, so emit any remaining
-- unmatched opening braces before any templates/parameters that
-- were found.
if not success then
self:emit(rep("{", braces))
failed = true
break
-- If there are 3+ opening and closing braces, it's a parameter.
elseif braces >= 3 and self:read(2) == "}" then
self:advance(3)
braces = braces - 3
node = Parameter:new(node)
-- Otherwise, it's a template.
else
self:advance(2)
braces = braces - 2
node = Template:new(node)
end
local index = head + braces
node.index = index
node.raw = sub(text, index, self.head - 1)
node_to_emit = node
-- Terminate once not enough braces remain for further matches.
if braces == 0 then
break
-- Emit any stray opening brace before any matched nodes.
elseif braces == 1 then
self:emit("{")
break
end
end
if node_to_emit then
self:emit(node_to_emit)
end
return braces, failed
end
end
-- Tag.
do
local end_tags
local function get_end_tags()
end_tags, get_end_tags = (data or get_data()).end_tags, nil
return end_tags
end
-- Handlers.
local handle_start
local handle_tag
local function do_tag(self)
local layer = self[-1]
layer.handler, layer.index = handle_start, self.head
self:set_pattern("[%s/>]")
self:advance()
end
local function is_ignored_tag(self, this)
if self.transcluded then
return this == "includeonly"
end
return this == "noinclude" or this == "onlyinclude"
end
local function ignored_tag(self, text, head)
local loc = find(text, ">", head, true)
if not loc then
return self:fail_route()
end
self:jump(loc)
local tag = self:pop()
tag.ignored = true
return tag
end
function handle_start(self, this)
if this == "/" then
local text, head = self.text, self.head + 1
local this = match(text, "^[^%s/>]+", head)
if this and is_ignored_tag(self, lower(this)) then
head = head + #this
if not match(text, "^/[^>]", head) then
return ignored_tag(self, text, head)
end
end
return self:fail_route()
elseif this == "" then
return self:fail_route()
end
-- Tags are only case-insensitive with ASCII characters.
local raw_name = this
this = lower(this)
local end_tag_pattern = (end_tags or get_end_tags())[this]
if not end_tag_pattern then -- Validity check.
return self:fail_route()
end
local layer = self[-1]
local text, head = self.text, self.head + layer.step
if match(text, "^/[^>]", head) then
return self:fail_route()
elseif is_ignored_tag(self, this) then
return ignored_tag(self, text, head)
-- If an onlyinclude tag is not ignored (and cannot be active since it
-- would have triggered special handling earlier), it must be plaintext.
elseif this == "onlyinclude" then
return self:fail_route()
elseif this == "noinclude" or this == "includeonly" then
layer.ignored = true -- Ignored block.
layer.raw_name = raw_name
end
layer.name, layer.handler, layer.end_tag_pattern = this, handle_tag, end_tag_pattern
self:set_pattern(">")
end
function handle_tag(self, this)
if this == "" then
return self:fail_route()
elseif this ~= ">" then
self[-1].attributes = this
return
elseif self:read(-1) == "/" then
self[-1].self_closing = true
return self:pop()
end
local text, head, layer = self.text, self.head + 1, self[-1]
local loc1, loc2 = find(text, layer.end_tag_pattern, head)
if loc1 then
if loc1 > head then
self:emit(sub(text, head, loc1 - 1))
end
self:jump(loc2)
return self:pop()
-- noinclude and includeonly will tolerate having no closing tag, but
-- only if given in lowercase. This is due to a preprocessor bug, as
-- it uses a regex with the /i (case-insensitive) flag to check for
-- end tags, but a simple array lookup with lowercase tag names when
-- looking up which tags should tolerate no closing tag (exact match
-- only, so case-sensitive).
elseif layer.ignored then
local raw_name = layer.raw_name
if raw_name == "noinclude" or raw_name == "includeonly" then
self:jump(#text)
return self:pop()
end
end
return self:fail_route()
end
function Parser:tag()
-- HTML comment.
if self:read(1, 3) == "!--" then
local text = self.text
self:jump(select(2, find(text, "-->", self.head + 4, true)) or #text)
-- onlyinclude tags (which must be lowercase with no whitespace).
elseif self.onlyinclude and self:read(1, 13) == "/onlyinclude>" then
local text = self.text
self:jump(select(2, find(text, "<onlyinclude>", self.head + 14, true)) or #text)
else
local success, tag = self:try(do_tag)
if not success then
self:emit("<")
elseif not tag.ignored then
tag.end_tag_pattern = nil
self:emit(Tag:new(tag))
end
end
end
end
-- Heading.
-- The preparser assigns each heading a number, which is used for things like section edit links. The preparser will only do this for heading blocks which aren't nested inside templates, parameters and parser tags. In some cases (e.g. when template blocks contain untrimmed newlines), a preparsed heading may not be treated as a heading in the final output. That does not affect the preparser, however, which will always count sections based on the preparser heading count, since it can't know what a template's final output will be.
do
-- Handlers.
local handle_start
local handle_body
local handle_possible_end
local function do_heading(self)
local layer, head = self[-1], self.head
layer.handler, layer.index = handle_start, head
self:set_pattern("[\t\n ]")
-- Comments/tags interrupt the equals count.
local eq = match(self.text, "^=+()", head) - head
layer.level = eq
self:advance(eq)
end
local function do_heading_possible_end(self)
local layer = self[-1]
layer.handler = handle_possible_end
self:set_pattern("[\n<]")
end
function handle_start(self, ...)
-- ===== is "=" as an L2; ======== is "==" as an L3 etc.
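-- Illustration (not in the original): for a heading that turns out to be nothing but
-- equals signs, `eq` is the total count and level_eq = eq - (2 - eq % 2), capped at 12,
-- is how many of them count towards the level; the rest are emitted as literal text.
-- E.g. eq = 5 ("=====") gives level_eq = 4, i.e. an L2 heading whose text is the single
-- excess "="; eq = 8 gives level_eq = 6, an L3 heading containing "==".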
local function newline(self)
local layer = self[-1]
local eq = layer.level
if eq <= 2 then
return self:fail_route()
end
-- Calculate which equals signs determine the heading level.
local level_eq = eq - (2 - eq % 2)
level_eq = level_eq > 12 and 12 or level_eq
-- Emit the excess.
self:emit(rep("=", eq - level_eq))
layer.level = level_eq / 2
return self:pop()
end
local function whitespace(self)
local success, possible_end = self:try(do_heading_possible_end)
if success then
self:emit(Wikitext:new(possible_end))
local layer = self[-1]
layer.handler = handle_body
self:set_pattern("[\n<=[{]")
return self:consume()
end
return newline(self)
end
handle_start = self:switch(handle_start, {
["\t"] = whitespace,
["\n"] = newline,
[" "] = whitespace,
[""] = newline,
[false] = function(self)
-- Emit any excess = signs once we know it's a conventional heading. Up till now, we couldn't know if the heading is just a string of = signs (e.g. ========), so it wasn't guaranteed that the heading text starts after the 6th.
local layer = self[-1]
local eq = layer.level
if eq > 6 then
self:emit(1, rep("=", eq - 6))
layer.level = 6
end
layer.handler = handle_body
self:set_pattern("[\n<=[{]")
return self:consume()
end
})
return handle_start(self, ...)
end
function handle_body(self, ...)
handle_body = self:switch(handle_body, {
["\n"] = Parser.fail_route,
["<"] = Parser.tag,
["="] = function(self)
-- Comments/tags interrupt the equals count.
local eq = match(self.text, "^=+", self.head)
local eq_len = #eq
self:advance(eq_len)
local success, possible_end = self:try(do_heading_possible_end)
if success then
self:emit(eq)
self:emit(Wikitext:new(possible_end))
return self:consume()
end
local layer = self[-1]
local level = layer.level
if eq_len > level then
self:emit(rep("=", eq_len - level))
elseif level > eq_len then
layer.level = eq_len
self:emit(1, rep("=", level - eq_len))
end
return self:pop()
end,
["["] = Parser.wikilink_block,
["{"] = function(self, this)
return self:braces("{", true)
end,
[""] = Parser.fail_route,
[false] = Parser.emit
})
return handle_body(self, ...)
end
function handle_possible_end(self, ...)
handle_possible_end = self:switch(handle_possible_end, {
["\n"] = Parser.fail_route,
["<"] = function(self)
if self:read(1, 3) ~= "!--" then
return self:pop()
end
local head = select(2, find(self.text, "-->", self.head + 4, true))
if not head then
return self:pop()
end
self:jump(head)
end,
[""] = Parser.fail_route,
[false] = function(self, this)
if not match(this, "^[\t ]+()$") then
return self:pop()
end
self:emit(this)
end
})
return handle_possible_end(self, ...)
end
function Parser:heading()
local success, heading = self:try(do_heading)
if success then
local section = self.section + 1
heading.section = section
self.section = section
self:emit(Heading:new(heading))
return self:consume()
else
self:emit("=")
end
end
end
------------------------------------------------------------------------------------
--
-- Block handlers
--
------------------------------------------------------------------------------------
-- Block handlers.
-- These are blocks which can affect template/parameter parsing, since they're also parsed by Parsoid at the same time (even though they aren't processed until later).
-- All blocks (including templates/parameters) can nest inside each other, but an inner block must be closed before the outer block which contains it. This is why, for example, the wikitext "{{template| [[ }}" will result in an unprocessed template, since the inner "[[" is treated as the opening of a wikilink block, which prevents "}}" from being treated as the closure of the template block. On the other hand, "{{template| [[ ]] }}" will process correctly, since the wikilink block is closed before the template closure. It makes no difference whether the block will be treated as valid or not when it's processed later on, so "{{template| [[ }} ]] }}" would also work, even though "[[ }} ]]" is not a valid wikilink.
-- Note that nesting also affects pipes and equals signs, in addition to block closures.
-- These blocks can be nested to any degree, so "{{template| [[ [[ [[ ]] }}" will not work, since only one of the three wikilink blocks has been closed. On the other hand, "{{template| [[ [[ [[ ]] ]] ]] }}" will work.
-- All blocks are implicitly closed by the end of the text, since their validity is irrelevant at this stage.
-- Language conversion block.
-- Opens with "-{" and closes with "}-". However, templates/parameters take priority, so "-{{" is parsed as "-" followed by the opening of a template/parameter block (depending on what comes after).
-- Note: Language conversion blocks aren't actually enabled on the English Wiktionary, but Parsoid still parses them at this stage, so they can affect the closure of outer blocks: e.g. "[[ -{ ]]" is not a valid wikilink block, since the "]]" falls inside the new language conversion block.
do
--Handler.
local handle_language_conversion_block
local function do_language_conversion_block(self)
local layer = self[-1]
layer.handler = handle_language_conversion_block
self:set_pattern("[\n<[{}]")
end
function handle_language_conversion_block(self, ...)
handle_language_conversion_block = self:switch(handle_language_conversion_block, {
["\n"] = Parser.heading_block,
["<"] = Parser.tag,
["["] = Parser.wikilink_block,
["{"] = Parser.braces,
["}"] = function(self)
if self:read(1) == "-" then
self:emit("}-")
self:advance()
return self:pop()
end
self:emit("}")
end,
[""] = Parser.pop,
[false] = Parser.emit
})
return handle_language_conversion_block(self, ...)
end
function Parser:braces(this, fail_on_unclosed_braces)
local language_conversion_block = self:read(-1) == "-"
if self:read(1) == "{" then
local braces, failed = self:template_or_parameter()
-- Headings will fail if they contain an unclosed brace block.
if failed and fail_on_unclosed_braces then
return self:fail_route()
-- Language conversion blocks cannot begin "-{{", but can begin
-- "-{{{" iff parsed as "-{" + "{{".
elseif not (language_conversion_block and braces == 1) then
return self:consume()
end
else
self:emit(this)
if not language_conversion_block then
return
end
self:advance()
end
self:emit(Wikitext:new(self:get(do_language_conversion_block)))
end
end
--[==[
Headings
Opens with "\n=" (or "=" at the start of the text), and closes with "\n" or the end of the text. Note that it doesn't matter whether the heading will fail to process due to a premature newline (e.g. if there are no closing signs), so at this stage the only thing that matters for closure is the newline or end of text.
Note: Heading blocks are only parsed like this if they occur inside a template, since they do not iterate the preparser's heading count (i.e. they aren't proper headings).
Note 2: if directly inside a template argument with no previous equals signs, a newline followed by a single equals sign is parsed as an argument equals sign, not the opening of a new L1 heading block. This does not apply to any other heading levels. As such, {{template|key\n=}}, {{template|key\n=value}} or even {{template|\n=}} will successfully close, but {{template|key\n==}}, {{template|key=value\n=more value}}, {{template\n=}} etc. will not, since in the latter cases the "}}" would fall inside the new heading block.
]==]
do
--Handler.
local handle_heading_block
local function do_heading_block(self)
local layer = self[-1]
layer.handler = handle_heading_block
self:set_pattern("[\n<[{]")
end
function handle_heading_block(self, ...)
handle_heading_block = self:switch(handle_heading_block, {
["\n"] = function(self)
self:newline()
return self:pop()
end,
["<"] = Parser.tag,
["["] = Parser.wikilink_block,
["{"] = Parser.braces,
[""] = Parser.pop,
[false] = Parser.emit
})
return handle_heading_block(self, ...)
end
function Parser:heading_block(this, nxt)
self:newline()
this = this .. (nxt or "=")
local loc = #this - 1
while self:read(0, loc) == this do
self:advance()
self:emit(Wikitext:new(self:get(do_heading_block)))
end
end
end
-- Wikilink block.
-- Opens with "[[" and closes with "]]".
do
-- Handler.
local handle_wikilink_block
local function do_wikilink_block(self)
local layer = self[-1]
layer.handler = handle_wikilink_block
self:set_pattern("[\n<[%]{]")
end
function handle_wikilink_block(self, ...)
handle_wikilink_block = self:switch(handle_wikilink_block, {
["\n"] = Parser.heading_block,
["<"] = Parser.tag,
["["] = Parser.wikilink_block,
["]"] = function(self)
if self:read(1) == "]" then
self:emit("]]")
self:advance()
return self:pop()
end
self:emit("]")
end,
["{"] = Parser.braces,
[""] = Parser.pop,
[false] = Parser.emit
})
return handle_wikilink_block(self, ...)
end
function Parser:wikilink_block()
if self:read(1) == "[" then
self:emit("[[")
self:advance(2)
self:emit(Wikitext:new(self:get(do_wikilink_block)))
else
self:emit("[")
end
end
end
-- Lines which only contain comments, " " and "\t" are eaten, so long as
-- they're bookended by "\n" (i.e. not the first or last line).
function Parser:newline()
local text, head = self.text, self.head
while true do
repeat
local loc = match(text, "^[\t ]*<!%-%-()", head + 1)
if not loc then
break
end
loc = select(2, find(text, "-->", loc, true))
head = loc or head
until not loc
-- Fail if no comments found.
if head == self.head then
break
end
head = match(text, "^[\t ]*()\n", head + 1)
if not head then
break
end
self:jump(head)
end
self:emit("\n")
end
do
-- Handlers.
local handle_start
local main_handler
-- If `transcluded` is true, then the text is checked for a pair of
-- onlyinclude tags. If these are found (even if they're in the wrong
-- order), then the start of the page is treated as though it is preceded
-- by a closing onlyinclude tag.
-- Note 1: unlike other parser extension tags, onlyinclude tags are case-
-- sensitive and cannot contain whitespace.
-- Note 2: onlyinclude tags *can* be implicitly closed by the end of the
-- text, but the hard requirement above means this can only happen if
-- either the tags are in the wrong order or there are multiple onlyinclude
-- blocks.
local function do_parse(self, transcluded)
local layer = self[-1]
layer.handler = handle_start
self:set_pattern(".")
self.section = 0
if not transcluded then
return
end
self.transcluded = true
local text = self.text
if find(text, "</onlyinclude>", 1, true) then
local head = find(text, "<onlyinclude>", 1, true)
if head then
self.onlyinclude = true
self:jump(head + 13)
end
end
end
-- If the first character is "=", try parsing it as a heading.
function handle_start(self, this)
local layer = self[-1]
layer.handler = main_handler
self:set_pattern("[\n<{]")
if this == "=" then
return self:heading()
end
return self:consume()
end
function main_handler(self, ...)
main_handler = self:switch(main_handler, {
["\n"] = function(self)
self:newline()
if self:read(1) == "=" then
self:advance()
return self:heading()
end
end,
["<"] = Parser.tag,
["{"] = function(self)
if self:read(1) == "{" then
self:template_or_parameter()
return self:consume()
end
self:emit("{")
end,
[""] = Parser.pop,
[false] = Parser.emit
})
return main_handler(self, ...)
end
function export.parse(text, transcluded)
local text_type = type(text)
return (select(2, Parser:parse{
text = text_type == "string" and text or
text_type == "number" and tostring(text) or
error("bad argument #1 (string expected, got " .. text_type .. ")"),
node = {Wikitext, true},
route = {do_parse, transcluded}
}))
end
parse = export.parse
end
do
local function next_template(iter)
while true do
local node = iter()
if node == nil or class_else_type(node) == "template" then
return node
end
end
end
function export.find_templates(text, not_transcluded)
return next_template, parse(text, not not_transcluded):__pairs("next_node")
end
end
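--[==[
Illustration only, not part of the original module: the typical entry points, called from
another module. The wikitext passed in is arbitrary example input.
	local template_parser = require("Module:template parser")
	local tree = template_parser.parse("foo {{bar|1=baz}} {{{param|}}}")
	-- tostring(tree) should round-trip to the original wikitext.
	for template in template_parser.find_templates("{{l|en|word}} and {{m|en|word}}") do
		-- template:get_name() -- "l", then "m" (assuming those pages exist locally
		--                     -- and are not redirects)
	end
]==]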
do
local link_parameter_1, link_parameter_2
local function get_link_parameter_1()
link_parameter_1, get_link_parameter_1 = (data or get_data()).template_link_param_1, nil
return link_parameter_1
end
local function get_link_parameter_2()
link_parameter_2, get_link_parameter_2 = (data or get_data()).template_link_param_2, nil
return link_parameter_2
end
-- Generate a link. If the target title doesn't have a fragment, use "#top"
-- (which is an implicit anchor at the top of every page), as this ensures
-- self-links still display as links, since bold display is distracting and
-- unintuitive for template links.
local function link_page(title, display)
local fragment = title.fragment
if fragment == "" then
fragment = "top"
end
return format(
"[[:%s|%s]]",
encode_uri(title.prefixedText .. "#" .. fragment, "WIKI"),
display
)
end
-- pf_arg1 or pf_arg2 may need to be linked if a given parser function
-- treats them as a pagename. If a key exists in `namespace`, the value is
-- the namespace for the page: if not 0, then the namespace prefix will
-- always be added to the input (e.g. {{#invoke:}} can only target the
-- Module: namespace, so inputting "Template:foo" gives
-- "Module:Template:foo", and "Module:foo" gives "Module:Module:foo").
-- However, this isn't possible with mainspace (namespace 0), so prefixes
-- are respected. make_title handles all of this automatically.
local function finalize_arg(pagename, namespace)
if namespace == nil then
return pagename
end
local title = make_title(namespace, pagename)
if not (title and is_valid_title(title)) then
return pagename
end
return link_page(title, pagename)
end
local function render_title(name, args)
-- parse_template_name returns a table of transclusion modifiers plus
-- the normalized template/magic word name, which will be used as link
-- targets. The third return value pf_arg1 is the first argument of a
-- parser function, which comes after the colon (e.g. "foo" in
-- "{{#IF:foo|bar|baz}}"). This means args[1] (i.e. the first argument
-- that comes after a pipe) is actually argument 2, and so on. Note: the
-- second parameter of parse_template_name checks if there are any
-- arguments, since parser variables cannot take arguments (e.g.
-- {{CURRENTYEAR}} is a parser variable, but {{CURRENTYEAR|foo}}
-- transcludes "Template:CURRENTYEAR"). In such cases, the returned
-- table explicitly includes the "Template:" prefix in the template
-- name. The third parameter instructs it to retain any fragment in the
-- template name in the returned table, if present.
local chunks, subclass, pf_arg1 = parse_template_name(
name,
args and pairs(args)(args) ~= nil,
true
)
if chunks == nil then
return name
end
local chunks_len = #chunks
-- Additionally, generate the corresponding table `rawchunks`, which
-- is a list of colon-separated chunks in the raw input. This is used
-- to retrieve the display forms for each chunk.
local rawchunks = split(name, ":")
for i = 1, chunks_len - 1 do
chunks[i] = format(
"[[%s|%s]]",
encode_uri((magic_words or get_magic_words())[sub(chunks[i], 1, -2)].transclusion_modifier, "WIKI"),
rawchunks[i]
)
end
local chunk = chunks[chunks_len]
-- If it's a template, return a link to it with link_page, concatenating
-- the remaining chunks in `rawchunks` to form the display text.
-- Use new_title with the default namespace 10 (Template:) to generate
-- a target title, which is the same setting used for retrieving
-- templates (including those in other namespaces, as prefixes override
-- the default).
if subclass == "template" then
chunks[chunks_len] = link_page(
new_title(chunk, 10),
concat(rawchunks, "&#58;", chunks_len) -- :
)
return concat(chunks, "&#58;") -- :
elseif subclass == "parser variable" then
chunks[chunks_len] = format(
"[[%s|%s]]",
encode_uri((magic_words or get_magic_words())[chunk].parser_variable, "WIKI"),
rawchunks[chunks_len]
)
return concat(chunks, "&#58;") -- :
end
-- Otherwise, it must be a parser function.
local mgw_data = (magic_words or get_magic_words())[sub(chunk, 1, -2)]
local link = mgw_data.parser_function or mgw_data.transclusion_modifier
local pf_arg2 = args and args[1] or nil
-- Some magic words have different links, depending on whether argument
-- 2 is specified (e.g. "baz" in {{foo:bar|baz}}).
if type(link) == "table" then
link = pf_arg2 and link[2] or link[1]
end
chunks[chunks_len] = format(
"[[%s|%s]]",
encode_uri(link, "WIKI"),
rawchunks[chunks_len]
)
-- #TAG: has special handling, because documentation links for parser
-- extension tags come from [[Module:data/parser extension tags]].
if chunk == "#TAG:" then
-- Tags are only case-insensitive with ASCII characters.
local tag = (parser_extension_tags or get_parser_extension_tags())[lower(php_trim(pf_arg1))]
if tag then
pf_arg1 = format(
"[[%s|%s]]",
encode_uri(tag, "WIKI"),
pf_arg1
)
end
-- Otherwise, finalize pf_arg1 and add it to `chunks`.
else
pf_arg1 = finalize_arg(pf_arg1, (link_parameter_1 or get_link_parameter_1())[chunk])
end
chunks[chunks_len + 1] = pf_arg1
-- Finalize pf_arg2 (if applicable), then return.
if pf_arg2 then
args[1] = finalize_arg(pf_arg2, (link_parameter_2 or get_link_parameter_2())[chunk])
end
return concat(chunks, "&#58;") -- :
end
function export.buildTemplate(title, args)
local output = {title}
-- Iterate over all numbered parameters in order, followed by any
-- remaining parameters in codepoint order. Implicit parameters are
-- used wherever possible, even if explicit numbers are interpolated
-- between them (e.g. 0 would go before any implicit parameters, and
-- 2.5 between 2 and 3).
-- TODO: handle "=" and "|" in params/values.
if args then
local iter, implicit = sorted_pairs(args), table_len(args)
local k, v = iter()
while k ~= nil do
if type(k) == "number" and k >= 1 and k <= implicit and k % 1 == 0 then
insert(output, v)
else
insert(output, k .. "=" .. v)
end
k, v = iter()
end
end
return output
end
build_template = export.buildTemplate
function export.templateLink(title, args, no_link)
local output = build_template(no_link and title or render_title(title, args), args)
for i = 1, #output do
output[i] = encode_entities(output[i], "={}", true, true)
end
return tostring(html_create("code")
:css("white-space", "pre-wrap")
:wikitext("&#123;&#123;" .. concat(output, "&#124;") .. "&#125;&#125;") -- {{ | }}
)
end
end
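--[==[
Illustration only, not part of the original module: a rough sketch of templateLink.
	export.templateLink("l", {"en", "word"})
This should return a <code> element whose visible text reads {{l|en|word}}, with the
template name linked to [[Template:l]] (via the implicit #top anchor) and the braces and
pipes emitted as HTML entities, so the call is displayed rather than expanded.
]==]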
do
local function next_parameter(iter)
while true do
local node = iter()
if node == nil or class_else_type(node) == "parameter" then
return node
end
end
end
function export.find_parameters(text, not_transcluded)
return next_parameter, parse(text, not not_transcluded):__pairs("next_node")
end
function export.displayParameter(name, default)
return tostring(html_create("code")
:css("white-space", "pre-wrap")
:wikitext("&#123;&#123;&#123;" .. concat({name, default}, "&#124;") .. "&#125;&#125;&#125;") -- {{{ | }}}
)
end
end
do
local function check_level(level)
if type(level) ~= "number" then
error("Heading levels must be numbers.")
elseif level < 1 or level > 6 or level % 1 ~= 0 then
error("Heading levels must be integers between 1 and 6.")
end
return level
end
local function next_heading(iter)
while true do
local node = iter()
if node == nil then
return nil
elseif class_else_type(node) == "heading" then
local level = node.level
if level >= iter.i and level <= iter.j then
return node
end
end
end
end
-- FIXME: should headings which contain "\n" be returned? This may depend
-- on variable factors, like template expansion. They iterate the heading
-- count number, but fail on rendering. However, in some cases a different
-- heading might still be rendered due to intermediate equals signs; it
-- may even be of a different heading level: e.g., this is parsed as an
-- L2 heading with a newline (due to the wikilink block), but renders as the
-- L1 heading "=foo[[". Section edit links are sometimes (but not always)
-- present in such cases.
-- ==[[=
-- ]]==
-- TODO: section numbers for edit links seem to also include headings
-- nested inside templates and parameters (but apparently not those in
-- parser extension tags - need to test this more). If we ever want to add
-- section edit links manually, this will need to be accounted for.
function export.find_headings(text, i, j)
local iter = parse(text):__pairs("next_node")
iter.i, iter.j = i and check_level(i) or 1, j and check_level(j) or 6
return next_heading, iter
end
end
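--[==[
Illustration only, not part of the original module: find_headings walks the headings in a
piece of wikitext, optionally restricted to a range of levels.
	for heading in export.find_headings("==Noun==\n# sense\n===Synonyms===", 2, 3) do
		-- heading.level      -- should be 2, then 3
		-- heading:get_name() -- should be "Noun", then "Synonyms"
	end
]==]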
do
local function make_tag(tag)
return tostring(html_create("code")
:css("white-space", "pre-wrap")
:wikitext("&lt;" .. tag .. "&gt;")
)
end
-- Note: invalid tags are returned without links.
function export.wikitagLink(tag)
-- ">" can't appear in tags (including attributes) since the parser
-- unconditionally treats ">" as the end of a tag.
if find(tag, ">", 1, true) then
return make_tag(tag)
end
-- Tags must start "<tagname..." or "</tagname...", with no whitespace
-- after "<" or "</".
local slash, tagname, remainder = match(tag, "^(/?)([^/%s]+)(.*)$")
if not tagname then
return make_tag(tag)
end
-- Tags are only case-insensitive with ASCII characters.
local link = lower(tagname)
if (
-- onlyinclude tags must be lowercase and are whitespace intolerant.
link == "onlyinclude" and (link ~= tagname or remainder ~= "") or
-- Closing wikitags (except onlyinclude) can only have whitespace
-- after the tag name.
slash == "/" and not match(remainder, "^%s*()$") or
-- Tagnames cannot be followed immediately by "/", unless it comes
-- at the end (e.g. "<nowiki/>", but not "<nowiki/ >").
remainder ~= "/" and sub(remainder, 1, 1) == "/"
) then
-- Output with no link.
return make_tag(tag)
end
-- Partial transclusion tags aren't in the table of parser extension
-- tags.
if link == "noinclude" or link == "includeonly" or link == "onlyinclude" then
link = "mw:Transclusion#Partial transclusion"
else
link = (parser_extension_tags or get_parser_extension_tags())[link]
end
if link then
tag = gsub(tag, pattern_escape(tagname), "[[" .. replacement_escape(encode_uri(link, "WIKI")) .. "|%0]]", 1)
end
return make_tag(tag)
end
end
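--[==[
Illustration only, not part of the original module: a sketch of wikitagLink behaviour.
	export.wikitagLink("nowiki")      -- tag name linked to its documentation page
	export.wikitagLink("/ref")        -- closing tag, still linked
	export.wikitagLink("bogus tag>")  -- contains ">", so returned without a link
]==]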
-- For convenience.
export.class_else_type = class_else_type
return export
im5kt08n2nmkvg31rbmxdtxlhku8ghm
Module:template parser/data
828
15137
193448
182562
2024-10-13T05:53:13Z
en>Theknightwho
0
193448
Scribunto
text/plain
local string = string
local gsub = string.gsub
local upper = string.upper
local data = {}
do
local tags = mw.loadData("Module:data/parser extension tags")
local data_end_tags = {}
-- The preprocessor uses the regex "/<\/TAG\s*>/i", so only ASCII characters
-- are case-insensitive.
local function char_pattern(ch)
local upper_ch = upper(ch)
return upper_ch == ch and ch or "[" .. upper(ch) .. ch .. "]"
end
-- Generates the string pattern for the end tag.
local function end_tag_pattern(tag)
data_end_tags[tag] = "</" .. gsub(tag, "[^\128-\255]", char_pattern) .. "%s*>"
end
for tag in pairs(tags) do
end_tag_pattern(tag)
end
end_tag_pattern("includeonly")
end_tag_pattern("noinclude")
data_end_tags["onlyinclude"] = true -- Pattern is not required, but a key is needed for tag validity checks.
data.end_tags = data_end_tags
end
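--[==[
Illustration only, not part of the original module: the generated end-tag pattern for
"nowiki" should be "</[Nn][Oo][Ww][Ii][Kk][Ii]%s*>", i.e. the end tag with any ASCII
casing and optional whitespace before ">", while bytes in the \128-\255 range are left
untouched.
]==]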
-- Character escapes from PHP's htmlspecialchars.
data.php_htmlspecialchars = {
["\""] = "&quot;",
["&"] = "&amp;",
["'"] = "&#039;",
["<"] = "&lt;",
[">"] = "&gt;",
}
-- The parser's HTML sanitizer validates tag attributes with the regex
-- "/^([:_\p{L}\p{N}][:_\.\-\p{L}\p{N}]*)$/sxu". Ustring's "%w" is defined as
-- "[\p{L}\p{Nd}]", so any characters in \p{N} but not \p{Nd} must be added
-- manually.
-- NOTE: \p{N} *MUST* be defined according to the same version of Unicode that
-- the sanitizer uses in order to remain in sync. As of September 2024, this is
-- version 11.0.
local N_not_Nd = "\194\178" .. -- U+00B2
"\194\179" .. -- U+00B3
"\194\185" .. -- U+00B9
"\194\188-\194\190" .. -- U+00BC-U+00BE
"\224\167\180-\224\167\185" .. -- U+09F4-U+09F9
"\224\173\178-\224\173\183" .. -- U+0B72-U+0B77
"\224\175\176-\224\175\178" .. -- U+0BF0-U+0BF2
"\224\177\184-\224\177\190" .. -- U+0C78-U+0C7E
"\224\181\152-\224\181\158" .. -- U+0D58-U+0D5E
"\224\181\176-\224\181\184" .. -- U+0D70-U+0D78
"\224\188\170-\224\188\179" .. -- U+0F2A-U+0F33
"\225\141\169-\225\141\188" .. -- U+1369-U+137C
"\225\155\174-\225\155\176" .. -- U+16EE-U+16F0
"\225\159\176-\225\159\185" .. -- U+17F0-U+17F9
"\225\167\154" .. -- U+19DA
"\226\129\176" .. -- U+2070
"\226\129\180-\226\129\185" .. -- U+2074-U+2079
"\226\130\128-\226\130\137" .. -- U+2080-U+2089
"\226\133\144-\226\134\130" .. -- U+2150-U+2182
"\226\134\133-\226\134\137" .. -- U+2185-U+2189
"\226\145\160-\226\146\155" .. -- U+2460-U+249B
"\226\147\170-\226\147\191" .. -- U+24EA-U+24FF
"\226\157\182-\226\158\147" .. -- U+2776-U+2793
"\226\179\189" .. -- U+2CFD
"\227\128\135" .. -- U+3007
"\227\128\161-\227\128\169" .. -- U+3021-U+3029
"\227\128\184-\227\128\186" .. -- U+3038-U+303A
"\227\134\146-\227\134\149" .. -- U+3192-U+3195
"\227\136\160-\227\136\169" .. -- U+3220-U+3229
"\227\137\136-\227\137\143" .. -- U+3248-U+324F
"\227\137\145-\227\137\159" .. -- U+3251-U+325F
"\227\138\128-\227\138\137" .. -- U+3280-U+3289
"\227\138\177-\227\138\191" .. -- U+32B1-U+32BF
"\234\155\166-\234\155\175" .. -- U+A6E6-U+A6EF
"\234\160\176-\234\160\181" .. -- U+A830-U+A835
"\240\144\132\135-\240\144\132\179" .. -- U+10107-U+10133
"\240\144\133\128-\240\144\133\184" .. -- U+10140-U+10178
"\240\144\134\138" .. -- U+1018A
"\240\144\134\139" .. -- U+1018B
"\240\144\139\161-\240\144\139\187" .. -- U+102E1-U+102FB
"\240\144\140\160-\240\144\140\163" .. -- U+10320-U+10323
"\240\144\141\129" .. -- U+10341
"\240\144\141\138" .. -- U+1034A
"\240\144\143\145-\240\144\143\149" .. -- U+103D1-U+103D5
"\240\144\161\152-\240\144\161\159" .. -- U+10858-U+1085F
"\240\144\161\185-\240\144\161\191" .. -- U+10879-U+1087F
"\240\144\162\167-\240\144\162\175" .. -- U+108A7-U+108AF
"\240\144\163\187-\240\144\163\191" .. -- U+108FB-U+108FF
"\240\144\164\150-\240\144\164\155" .. -- U+10916-U+1091B
"\240\144\166\188" .. -- U+109BC
"\240\144\166\189" .. -- U+109BD
"\240\144\167\128-\240\144\167\143" .. -- U+109C0-U+109CF
"\240\144\167\146-\240\144\167\191" .. -- U+109D2-U+109FF
"\240\144\169\128-\240\144\169\136" .. -- U+10A40-U+10A48
"\240\144\169\189" .. -- U+10A7D
"\240\144\169\190" .. -- U+10A7E
"\240\144\170\157-\240\144\170\159" .. -- U+10A9D-U+10A9F
"\240\144\171\171-\240\144\171\175" .. -- U+10AEB-U+10AEF
"\240\144\173\152-\240\144\173\159" .. -- U+10B58-U+10B5F
"\240\144\173\184-\240\144\173\191" .. -- U+10B78-U+10B7F
"\240\144\174\169-\240\144\174\175" .. -- U+10BA9-U+10BAF
"\240\144\179\186-\240\144\179\191" .. -- U+10CFA-U+10CFF
"\240\144\185\160-\240\144\185\190" .. -- U+10E60-U+10E7E
"\240\144\188\157-\240\144\188\166" .. -- U+10F1D-U+10F26
"\240\144\189\145-\240\144\189\148" .. -- U+10F51-U+10F54
"\240\145\129\146-\240\145\129\165" .. -- U+11052-U+11065
"\240\145\135\161-\240\145\135\180" .. -- U+111E1-U+111F4
"\240\145\156\186" .. -- U+1173A
"\240\145\156\187" .. -- U+1173B
"\240\145\163\170-\240\145\163\178" .. -- U+118EA-U+118F2
"\240\145\177\154-\240\145\177\172" .. -- U+11C5A-U+11C6C
"\240\146\144\128-\240\146\145\174" .. -- U+12400-U+1246E
"\240\150\173\155-\240\150\173\161" .. -- U+16B5B-U+16B61
"\240\150\186\128-\240\150\186\150" .. -- U+16E80-U+16E96
"\240\157\139\160-\240\157\139\179" .. -- U+1D2E0-U+1D2F3
"\240\157\141\160-\240\157\141\184" .. -- U+1D360-U+1D378
"\240\158\163\135-\240\158\163\143" .. -- U+1E8C7-U+1E8CF
"\240\158\177\177-\240\158\178\171" .. -- U+1EC71-U+1ECAB
"\240\158\178\173-\240\158\178\175" .. -- U+1ECAD-U+1ECAF
"\240\158\178\177-\240\158\178\180" .. -- U+1ECB1-U+1ECB4
"\240\159\132\128-\240\159\132\140" -- U+1F100-U+1F10C
data.valid_attribute_name = "^[:_%w" .. N_not_Nd .."][:_.%-%w" .. N_not_Nd .. "]*$"
-- Value is the namespace number of the linked page at parameter 0, where 0 is mainspace.
-- If the namespace is the mainspace, it can be overridden by an explicitly specified category (e.g. {{PAGENAME:Category:Foo}} refers to "Category:Foo"). This does not apply to any other namespace (e.g. {{#SPECIAL:Category:Foo}} refers to "Special:Category:Foo").
data.template_link_param_1 = {
["#CATEGORYTREE:"] = 14, -- Category:
["#IFEXIST:"] = 0,
["#INVOKE:"] = 828, -- Module:
["#LST:"] = 0,
["#LSTH:"] = 0,
["#LSTX:"] = 0,
["#SPECIAL:"] = -1, -- Special:
["#SPECIALE:"] = -1, -- Special:
["#TITLEPARTS:"] = 0,
["BASEPAGENAME:"] = 0,
["BASEPAGENAMEE:"] = 0,
["CANONICALURL:"] = 0,
["CANONICALURLE:"] = 0,
["CASCADINGSOURCES:"] = 0,
["FILEPATH:"] = 6, -- File:
["FULLPAGENAME:"] = 0,
["FULLPAGENAMEE:"] = 0,
["FULLURL:"] = 0,
["FULLURLE:"] = 0,
["INT:"] = 8, -- MediaWiki:
["LOCALURL:"] = 0,
["LOCALURLE:"] = 0,
["NAMESPACE:"] = 0,
["NAMESPACEE:"] = 0,
["NAMESPACENUMBER:"] = 0,
["PAGEID:"] = 0,
["PAGENAME:"] = 0,
["PAGENAMEE:"] = 0,
["PAGESINCATEGORY:"] = 14, -- Category:
["PAGESIZE:"] = 0,
["REVISIONDAY:"] = 0,
["REVISIONDAY2:"] = 0,
["REVISIONID:"] = 0,
["REVISIONMONTH:"] = 0,
["REVISIONMONTH1:"] = 0,
["REVISIONTIMESTAMP:"] = 0,
["REVISIONUSER:"] = 0,
["REVISIONYEAR:"] = 0,
["ROOTPAGENAME:"] = 0,
["ROOTPAGENAMEE:"] = 0,
["SUBJECTPAGENAME:"] = 0,
["SUBJECTPAGENAMEE:"] = 0,
["SUBJECTSPACE:"] = 0,
["SUBJECTSPACEE:"] = 0,
["SUBPAGENAME:"] = 0,
["SUBPAGENAMEE:"] = 0,
["TALKPAGENAME:"] = 0,
["TALKPAGENAMEE:"] = 0,
["TALKSPACE:"] = 0,
["TALKSPACEE:"] = 0,
}
-- Value is the namespace number of the linked page at parameter 1.
data.template_link_param_2 = {
["PROTECTIONEXPIRY:"] = 0,
["PROTECTIONLEVEL:"] = 0,
}
return data
k8hhdejnr6p3ptgnzery32bjc9zzudo
193449
193448
2024-11-21T10:32:18Z
Lee
19
One revision imported from [[:en:Module:template_parser/data]]
193448
Scribunto
text/plain
local string = string
local gsub = string.gsub
local upper = string.upper
local data = {}
do
local tags = mw.loadData("Module:data/parser extension tags")
local data_end_tags = {}
-- The preprocessor uses the regex "/<\/TAG\s*>/i", so only ASCII characters
-- are case-insensitive.
local function char_pattern(ch)
local upper_ch = upper(ch)
return upper_ch == ch and ch or "[" .. upper(ch) .. ch .. "]"
end
-- Generates the string pattern for the end tag.
local function end_tag_pattern(tag)
data_end_tags[tag] = "</" .. gsub(tag, "[^\128-\255]", char_pattern) .. "%s*>"
end
for tag in pairs(tags) do
end_tag_pattern(tag)
end
end_tag_pattern("includeonly")
end_tag_pattern("noinclude")
data_end_tags["onlyinclude"] = true -- Pattern is not required, but a key is needed for tag validity checks.
data.end_tags = data_end_tags
end
-- Character escapes from PHP's htmlspecialchars.
data.php_htmlspecialchars = {
["\""] = "&quot;",
["&"] = "&amp;",
["'"] = "&#039;",
["<"] = "&lt;",
[">"] = "&gt;",
}
-- The parser's HTML sanitizer validates tag attributes with the regex
-- "/^([:_\p{L}\p{N}][:_\.\-\p{L}\p{N}]*)$/sxu". Ustring's "%w" is defined as
-- "[\p{L}\p{Nd}]", so any characters in \p{N} but not \p{Nd} must be added
-- manually.
-- NOTE: \p{N} *MUST* be defined according to the same version of Unicode that
-- the sanitizer uses in order to remain in sync. As of September 2024, this is
-- version 11.0.
local N_not_Nd = "\194\178" .. -- U+00B2
"\194\179" .. -- U+00B3
"\194\185" .. -- U+00B9
"\194\188-\194\190" .. -- U+00BC-U+00BE
"\224\167\180-\224\167\185" .. -- U+09F4-U+09F9
"\224\173\178-\224\173\183" .. -- U+0B72-U+0B77
"\224\175\176-\224\175\178" .. -- U+0BF0-U+0BF2
"\224\177\184-\224\177\190" .. -- U+0C78-U+0C7E
"\224\181\152-\224\181\158" .. -- U+0D58-U+0D5E
"\224\181\176-\224\181\184" .. -- U+0D70-U+0D78
"\224\188\170-\224\188\179" .. -- U+0F2A-U+0F33
"\225\141\169-\225\141\188" .. -- U+1369-U+137C
"\225\155\174-\225\155\176" .. -- U+16EE-U+16F0
"\225\159\176-\225\159\185" .. -- U+17F0-U+17F9
"\225\167\154" .. -- U+19DA
"\226\129\176" .. -- U+2070
"\226\129\180-\226\129\185" .. -- U+2074-U+2079
"\226\130\128-\226\130\137" .. -- U+2080-U+2089
"\226\133\144-\226\134\130" .. -- U+2150-U+2182
"\226\134\133-\226\134\137" .. -- U+2185-U+2189
"\226\145\160-\226\146\155" .. -- U+2460-U+249B
"\226\147\170-\226\147\191" .. -- U+24EA-U+24FF
"\226\157\182-\226\158\147" .. -- U+2776-U+2793
"\226\179\189" .. -- U+2CFD
"\227\128\135" .. -- U+3007
"\227\128\161-\227\128\169" .. -- U+3021-U+3029
"\227\128\184-\227\128\186" .. -- U+3038-U+303A
"\227\134\146-\227\134\149" .. -- U+3192-U+3195
"\227\136\160-\227\136\169" .. -- U+3220-U+3229
"\227\137\136-\227\137\143" .. -- U+3248-U+324F
"\227\137\145-\227\137\159" .. -- U+3251-U+325F
"\227\138\128-\227\138\137" .. -- U+3280-U+3289
"\227\138\177-\227\138\191" .. -- U+32B1-U+32BF
"\234\155\166-\234\155\175" .. -- U+A6E6-U+A6EF
"\234\160\176-\234\160\181" .. -- U+A830-U+A835
"\240\144\132\135-\240\144\132\179" .. -- U+10107-U+10133
"\240\144\133\128-\240\144\133\184" .. -- U+10140-U+10178
"\240\144\134\138" .. -- U+1018A
"\240\144\134\139" .. -- U+1018B
"\240\144\139\161-\240\144\139\187" .. -- U+102E1-U+102FB
"\240\144\140\160-\240\144\140\163" .. -- U+10320-U+10323
"\240\144\141\129" .. -- U+10341
"\240\144\141\138" .. -- U+1034A
"\240\144\143\145-\240\144\143\149" .. -- U+103D1-U+103D5
"\240\144\161\152-\240\144\161\159" .. -- U+10858-U+1085F
"\240\144\161\185-\240\144\161\191" .. -- U+10879-U+1087F
"\240\144\162\167-\240\144\162\175" .. -- U+108A7-U+108AF
"\240\144\163\187-\240\144\163\191" .. -- U+108FB-U+108FF
"\240\144\164\150-\240\144\164\155" .. -- U+10916-U+1091B
"\240\144\166\188" .. -- U+109BC
"\240\144\166\189" .. -- U+109BD
"\240\144\167\128-\240\144\167\143" .. -- U+109C0-U+109CF
"\240\144\167\146-\240\144\167\191" .. -- U+109D2-U+109FF
"\240\144\169\128-\240\144\169\136" .. -- U+10A40-U+10A48
"\240\144\169\189" .. -- U+10A7D
"\240\144\169\190" .. -- U+10A7E
"\240\144\170\157-\240\144\170\159" .. -- U+10A9D-U+10A9F
"\240\144\171\171-\240\144\171\175" .. -- U+10AEB-U+10AEF
"\240\144\173\152-\240\144\173\159" .. -- U+10B58-U+10B5F
"\240\144\173\184-\240\144\173\191" .. -- U+10B78-U+10B7F
"\240\144\174\169-\240\144\174\175" .. -- U+10BA9-U+10BAF
"\240\144\179\186-\240\144\179\191" .. -- U+10CFA-U+10CFF
"\240\144\185\160-\240\144\185\190" .. -- U+10E60-U+10E7E
"\240\144\188\157-\240\144\188\166" .. -- U+10F1D-U+10F26
"\240\144\189\145-\240\144\189\148" .. -- U+10F51-U+10F54
"\240\145\129\146-\240\145\129\165" .. -- U+11052-U+11065
"\240\145\135\161-\240\145\135\180" .. -- U+111E1-U+111F4
"\240\145\156\186" .. -- U+1173A
"\240\145\156\187" .. -- U+1173B
"\240\145\163\170-\240\145\163\178" .. -- U+118EA-U+118F2
"\240\145\177\154-\240\145\177\172" .. -- U+11C5A-U+11C6C
"\240\146\144\128-\240\146\145\174" .. -- U+12400-U+1246E
"\240\150\173\155-\240\150\173\161" .. -- U+16B5B-U+16B61
"\240\150\186\128-\240\150\186\150" .. -- U+16E80-U+16E96
"\240\157\139\160-\240\157\139\179" .. -- U+1D2E0-U+1D2F3
"\240\157\141\160-\240\157\141\184" .. -- U+1D360-U+1D378
"\240\158\163\135-\240\158\163\143" .. -- U+1E8C7-U+1E8CF
"\240\158\177\177-\240\158\178\171" .. -- U+1EC71-U+1ECAB
"\240\158\178\173-\240\158\178\175" .. -- U+1ECAD-U+1ECAF
"\240\158\178\177-\240\158\178\180" .. -- U+1ECB1-U+1ECB4
"\240\159\132\128-\240\159\132\140" -- U+1F100-U+1F10C
data.valid_attribute_name = "^[:_%w" .. N_not_Nd .."][:_.%-%w" .. N_not_Nd .. "]*$"
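-- A calling module would typically validate a candidate attribute name with
-- mw.ustring.find(name, data.valid_attribute_name) (a sketch; "name" is a
-- hypothetical variable). Plain string.find is not enough here, since only
-- ustring patterns give %w the full \p{L}\p{Nd} coverage noted above.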
-- Value is the namespace number of the linked page at parameter 0, where 0 is mainspace.
-- If the namespace is the mainspace, it can be overridden by an explicitly specified category (e.g. {{PAGENAME:Category:Foo}} refers to "Category:Foo"). This does not apply to any other namespace (e.g. {{#SPECIAL:Category:Foo}} refers to "Special:Category:Foo").
data.template_link_param_1 = {
["#CATEGORYTREE:"] = 14, -- Category:
["#IFEXIST:"] = 0,
["#INVOKE:"] = 828, -- Module:
["#LST:"] = 0,
["#LSTH:"] = 0,
["#LSTX:"] = 0,
["#SPECIAL:"] = -1, -- Special:
["#SPECIALE:"] = -1, -- Special:
["#TITLEPARTS:"] = 0,
["BASEPAGENAME:"] = 0,
["BASEPAGENAMEE:"] = 0,
["CANONICALURL:"] = 0,
["CANONICALURLE:"] = 0,
["CASCADINGSOURCES:"] = 0,
["FILEPATH:"] = 6, -- File:
["FULLPAGENAME:"] = 0,
["FULLPAGENAMEE:"] = 0,
["FULLURL:"] = 0,
["FULLURLE:"] = 0,
["INT:"] = 8, -- MediaWiki:
["LOCALURL:"] = 0,
["LOCALURLE:"] = 0,
["NAMESPACE:"] = 0,
["NAMESPACEE:"] = 0,
["NAMESPACENUMBER:"] = 0,
["PAGEID:"] = 0,
["PAGENAME:"] = 0,
["PAGENAMEE:"] = 0,
["PAGESINCATEGORY:"] = 14, -- Category:
["PAGESIZE:"] = 0,
["REVISIONDAY:"] = 0,
["REVISIONDAY2:"] = 0,
["REVISIONID:"] = 0,
["REVISIONMONTH:"] = 0,
["REVISIONMONTH1:"] = 0,
["REVISIONTIMESTAMP:"] = 0,
["REVISIONUSER:"] = 0,
["REVISIONYEAR:"] = 0,
["ROOTPAGENAME:"] = 0,
["ROOTPAGENAMEE:"] = 0,
["SUBJECTPAGENAME:"] = 0,
["SUBJECTPAGENAMEE:"] = 0,
["SUBJECTSPACE:"] = 0,
["SUBJECTSPACEE:"] = 0,
["SUBPAGENAME:"] = 0,
["SUBPAGENAMEE:"] = 0,
["TALKPAGENAME:"] = 0,
["TALKPAGENAMEE:"] = 0,
["TALKSPACE:"] = 0,
["TALKSPACEE:"] = 0,
}
-- Value is the namespace number of the linked page at parameter 1.
data.template_link_param_2 = {
["PROTECTIONEXPIRY:"] = 0,
["PROTECTIONLEVEL:"] = 0,
}
return data
k8hhdejnr6p3ptgnzery32bjc9zzudo
Module:template parser/documentation
828
15138
193450
182566
2024-11-16T12:40:46Z
en>Theknightwho
0
193450
wikitext
text/x-wiki
{{documentation outdated}}
This module provides functions for parsing and finding template invocations found in wikitext.
; {{code|lua|parseTemplate(text, not_transcluded)}}
: Parses <tt>text</tt> as a template invocation and returns a pair of values: the template name and the arguments (containing anonymous, numbered and named arguments). The parser will correctly parse any wikitext given as template arguments (such as subtemplates, arguments, tables, etc.), but if the string does not form a valid template invocation, the function returns <code>nil</code>.
; {{code|lua|findTemplates(text, not_transcluded)}}
: Finds all template invocations in the text. This is designed to be used as an iterator in <tt>for</tt> statements, and returns four values for each invocation:
# The template name.
# The template arguments.
# The full template invocation as it appears in the original text.
# The index the template appears at within the given text; as with Lua in general, the beginning of the text is index 1.
For convenience, template names will be normalized in two ways:
# They are preprocessed, which means that any templates ({{tl| }}) and parameters ({{param| }}) they contain will be resolved.
# Any redirects will be converted to their canonical equivalents (e.g. {{tl|l}} is treated as {{tl|link}}).
Note that any templates with invalid names (after preprocessing) will be skipped over. For performance reasons, preprocessing is only applied to the keys in a template's table of arguments, so it should be applied (selectively) to the values by the calling module when needed.
Note that the parser will respect {{wt|noinclude}}, {{wt|includeonly}} and {{wt|onlyinclude}} tags. By default, <tt>text</tt> is treated as though it has been [[mw:Transclusion|transcluded]], which means that text between {{wt|noinclude}} tags will be ignored, and {{wt|onlyinclude}} tags will be respected if present. If the parameter <tt>not_transcluded</tt> is set to {{code|lua|true}}, then <tt>text</tt> will be treated as though it has not been transcluded, which means text between {{wt|includeonly}} tags will be ignored instead.
Although the parser is very accurate, some discrepancies may still exist between it and the native parser in certain cases.
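As a rough illustration, a caller following the iterator contract described above might look like this (a minimal sketch; since this documentation is flagged as outdated, treat the exact function name as illustrative, and the sample wikitext is arbitrary):
<syntaxhighlight lang="lua">
local find_templates = require("Module:template parser").findTemplates

local text = "foo {{l|en|word}} bar {{m|la|verbum}}"
for name, args, invocation, index in find_templates(text) do
	-- For the first match this yields "link" (the {{l}} redirect is
	-- normalized), { "en", "word" }, "{{l|en|word}}" and index 5.
	mw.log(index, name, invocation)
end
</syntaxhighlight>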
<includeonly>
{{module cat|-|Wikitext parsing}}
</includeonly>
7la15fluinz88mnu350p2tc5d8i8nly
193451
193450
2024-11-21T10:32:35Z
Lee
19
One revision from [[:en:Module:template_parser/documentation]]
193450
wikitext
text/x-wiki
{{documentation outdated}}
This module provides functions for parsing and finding template invocations found in wikitext.
; {{code|lua|parseTemplate(text, not_transcluded)}}
: Parses <tt>text</tt> as a template invocation and returns a pair of values: the template name and the arguments (containing anonymous, numbered and named arguments). The parser will correctly parse any wikitext given as template arguments (such as subtemplates, arguments, tables, etc.), but if the string does not form a valid template invocation, the function returns <code>nil</code>.
; {{code|lua|findTemplates(text, not_transcluded)}}
: Finds all template invocations in the text. This is designed to be used as an iterator in <tt>for</tt> statements, and returns four values for each invocation:
# The template name.
# The template arguments.
# The full template invocation as it appears in the original text.
# The index the template appears at within the given text; as with Lua in general, the beginning of the text is index 1.
For convenience, template names will be normalized in two ways:
# They are preprocessed, which means that any templates ({{tl| }}) and parameters ({{param| }}) they contain will be resolved.
# Any redirects will be converted to their canonical equivalents (e.g. {{tl|l}} is treated as {{tl|link}}).
Note that any templates with invalid names (after preprocessing) will be skipped over. For performance reasons, preprocessing is only applied to the keys in a template's table of arguments, so it should be applied (selectively) to the values by the calling module when needed.
Note that the parser will respect {{wt|noinclude}}, {{wt|includeonly}} and {{wt|onlyinclude}} tags. By default, <tt>text</tt> is treated as though it has been [[mw:Transclusion|transcluded]], which means that text between {{wt|noinclude}} tags will be ignored, and {{wt|onlyinclude}} tags will be respected if present. If the parameter <tt>not_transcluded</tt> is set to {{code|lua|true}}, then <tt>text</tt> will be treated as though it has not been transcluded, which means text between {{wt|includeonly}} tags will be ignored instead.
Although the parser is very accurate, some discrepancies may still exist between it and the native parser in certain cases.
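The testcases below exercise the module's current entry point, <code>parse</code>, which returns a template object; a minimal sketch of that usage (names taken from the testcases, sample wikitext arbitrary):
<syntaxhighlight lang="lua">
local parse = require("Module:template parser").parse

local template = parse("{{l|en|word}}")
local name = template:get_name()       -- "link" (redirects are normalized)
local args = template:get_arguments()  -- { "en", "word" }
</syntaxhighlight>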
<includeonly>
{{module cat|-|Wikitext parsing}}
</includeonly>
7la15fluinz88mnu350p2tc5d8i8nly
Module:template parser/testcases
828
15140
193452
164459
2024-10-07T02:40:44Z
en>Theknightwho
0
Use parse, not parseTemplate, and preprocess examples, since the template parser now does that.
193452
Scribunto
text/plain
local tests = require "Module:UnitTests"
local highlight = require("Module:debug").highlight{ lang = "text"}
local parse = require("Module:template parser").parse
function tests:check_templates(examples)
local options = { nowiki = true }
tests:iterate(examples, function(self, wikitext, expected)
local template = parse(wikitext)
self:equals_deep(highlight(wikitext), {template:get_name(), template:get_arguments()}, expected, options)
end)
end
function tests:test_basic()
local examples = {
{
"{{l|en|word}}",
{ "link", { "en", "word" } },
},
{
"{{t|cmn|大老二|tr={{m|cmn|dà lǎo èr}}}}",
{ "t", { "cmn", "大老二", tr = "{{m|cmn|dà lǎo èr}}" } },
},
{
"{{t|akk|𒁀|tr=[[Image:B014ellst.png|30px]] qiāšu, BA}}",
{ "t", { "akk", "𒁀", tr = "[[Image:B014ellst.png|30px]] qiāšu, BA" } },
}
}
local frame = mw.getCurrentFrame()
for i, example in ipairs(examples) do
local args, new = example[2][2], {}
for k, v in pairs(args) do
k = type(k) == "string" and frame:preprocess(k) or k
new[k] = frame:preprocess(v)
end
example[2][2] = new
end
self:check_templates(examples)
end
function tests:test_whitespace()
self:check_templates {
{ "{{l| en | word\n}}", { "link", { " en ", " word\n" } } },
{ "{{l| en | 2 = word\n}}", { "link", { " en ", "word" } } },
{ "{{l| 1 = en | word\n}}", { "link", { " word\n" } } },
}
end
return tests
08lnctsz14ippy4dzr4ot8prkm567no
193453
193452
2024-11-21T10:33:17Z
Lee
19
One revision from [[:en:Module:template_parser/testcases]]
193452
Scribunto
text/plain
local tests = require "Module:UnitTests"
local highlight = require("Module:debug").highlight{ lang = "text"}
local parse = require("Module:template parser").parse
function tests:check_templates(examples)
local options = { nowiki = true }
tests:iterate(examples, function(self, wikitext, expected)
local template = parse(wikitext)
self:equals_deep(highlight(wikitext), {template:get_name(), template:get_arguments()}, expected, options)
end)
end
function tests:test_basic()
local examples = {
{
"{{l|en|word}}",
{ "link", { "en", "word" } },
},
{
"{{t|cmn|大老二|tr={{m|cmn|dà lǎo èr}}}}",
{ "t", { "cmn", "大老二", tr = "{{m|cmn|dà lǎo èr}}" } },
},
{
"{{t|akk|𒁀|tr=[[Image:B014ellst.png|30px]] qiāšu, BA}}",
{ "t", { "akk", "𒁀", tr = "[[Image:B014ellst.png|30px]] qiāšu, BA" } },
}
}
local frame = mw.getCurrentFrame()
for i, example in ipairs(examples) do
local args, new = example[2][2], {}
for k, v in pairs(args) do
k = type(k) == "string" and frame:preprocess(k) or k
new[k] = frame:preprocess(v)
end
example[2][2] = new
end
self:check_templates(examples)
end
function tests:test_whitespace()
self:check_templates {
{ "{{l| en | word\n}}", { "link", { " en ", " word\n" } } },
{ "{{l| en | 2 = word\n}}", { "link", { " en ", "word" } } },
{ "{{l| 1 = en | word\n}}", { "link", { " word\n" } } },
}
end
return tests
08lnctsz14ippy4dzr4ot8prkm567no
Module:parser
828
15148
193469
183606
2024-10-04T13:24:23Z
en>Theknightwho
0
Remodel iterators so that it's possible to edit a node tree synchronously with the parse, instead of being committed to a node as soon as it is reached. Also remove the need to call select on each iteration.
193469
Scribunto
text/plain
local export = {}
local concat = table.concat
local deepcopy -- Assigned when needed.
local getmetatable = getmetatable
local insert = table.insert
local next = next
local rawget = rawget
local rawset = rawset
local remove = table.remove
local setmetatable = setmetatable
local type = type
local unpack = unpack
local classes = {}
local metamethods = mw.loadData("Module:data/metamethods")
------------------------------------------------------------------------------------
--
-- Helper functions
--
------------------------------------------------------------------------------------
local function get_nested(t, k, ...)
if t == nil then
return nil
elseif ... == nil then
return t[k]
end
return get_nested(t[k], ...)
end
local function set_nested(t, k, v, ...)
if ... ~= nil then
local t_next = t[k]
if t_next == nil then
t_next = {}
t[k] = t_next
end
return set_nested(t_next, v, ...)
end
t[k] = v
end
local function inherit_metamethods(child, parent)
if parent then
for method, value in next, parent do
if child[method] == nil and metamethods[method] ~= nil then
child[method] = value
end
end
end
return child
end
local function signed_index(t, n)
return n and n <= 0 and #t + 1 + n or n
end
local function is_node(value)
return classes[getmetatable(value)] ~= nil
end
-- Recursively calling tostring() adds to the C stack (limit: 200), whereas
-- calling __tostring metamethods directly does not. Occasionally relevant when
-- dealing with very deep nesting.
local tostring
do
local _tostring = _G.tostring
function tostring(value)
if is_node(value) then
return value:__tostring(value)
end
return _tostring(value)
end
end
local function class_else_type(value)
local class = classes[getmetatable(value)]
if class ~= nil then
return class
end
return type(value)
end
------------------------------------------------------------------------------------
--
-- Nodes
--
------------------------------------------------------------------------------------
local Node = {}
Node.__index = Node
function Node:next(i)
i = i + 1
return self[i], self, i
end
function Node:next_node(i)
local v
repeat
v, self, i = self:next(i)
until v == nil or is_node(v)
return v, self, i
end
-- Implements recursive iteration over a node tree, using functors to maintain state (which uses a lot less memory than closures). Iterator1 exists only to return the calling node on the first iteration, while Iterator2 uses a stack to store the state of each layer in the tree.
-- When a node is encountered (which may contain other nodes), it is returned on the first iteration, and then any child nodes are returned on each subsequent iteration; the same process is followed if any of those children contain nodes themselves. Once a particular node has been fully traversed, the iterator moves back up one layer and continues with any sibling nodes.
-- Each iteration returns three values: `value`, `node` and `key`. Together, these can be used to manipulate the node tree at any given point without needing to know the full structure. Note that when the input node is returned on the first iteration, `node` and `key` will be nil.
-- By default, the iterator will use the `next` method of each node, but this can be changed with the `next_func` parameter, which accepts a string argument with the name of a next method. This is because trees might consist of several different classes of node, and each might have different next methods that are tailored to their particular structures. In addition, each class of node might have multiple different next methods, which can be named according to their purposes. `next_func` ensures that the iterator uses equivalent next methods between different types of node.
-- Currently, two next methods are available: `next`, which simply iterates over the node conventionally, and `next_node`, which only returns children that are themselves nodes. Custom next methods can be declared by any calling module.
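-- A typical walk over a finished tree therefore looks something like this
-- (a sketch; "tree" stands for any node built by a parser based on this
-- module):
--   for value, node, key in tree:__pairs("next_node") do
--       -- On the first iteration value == tree and node/key are nil;
--       -- afterwards node[key] == value, so the tree can be edited in place.
--   end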
do
local Iterator1, Iterator2 = {}, {}
Iterator1.__index = Iterator2 -- Not a typo.
Iterator2.__index = Iterator2
function Iterator1:__call()
setmetatable(self, Iterator2)
return self[1].node
end
function Iterator2:push(node)
local layer = {
k = 0,
node = node
}
self[#self + 1] = layer
self[-1] = layer
return self
end
function Iterator2:pop()
local len = #self
self[len] = nil
self[-1] = self[len - 1]
end
function Iterator2:iterate(layer, ...)
local v, node, k = ...
if v ~= nil then
layer.k = k
return ...
end
self:pop()
layer = self[-1]
if layer ~= nil then
node = layer.node
return self:iterate(layer, node[self.next_func](node, layer.k))
end
end
function Iterator2:__call()
local layer = self[-1]
local node, k = layer.node, layer.k
local curr_val = node[k]
if is_node(curr_val) then
self:push(curr_val)
layer = self[-1]
node, k = layer.node, layer.k
end
return self:iterate(layer, node[self.next_func](node, k))
end
function Node:__pairs(next_func)
return setmetatable({
next_func = next_func == nil and "next" or next_func
}, Iterator1):push(self)
end
end
function Node:rawpairs()
return next, self
end
function Node:__tostring()
local output = {}
for i = 1, #self do
insert(output, tostring(self[i]))
end
return concat(output)
end
function Node:clone()
if not deepcopy then
deepcopy = require("Module:table").deepcopy
end
return deepcopy(self, "keep", true)
end
function Node:new_class(class)
local t = {type = class}
t.__index = t
t = inherit_metamethods(t, self)
classes[t] = class
return setmetatable(t, self)
end
Node.keys_to_remove = {"fail", "handler", "head", "override", "route"}
function Node:new(t)
setmetatable(t, nil)
local keys_to_remove = self.keys_to_remove
for i = 1, #keys_to_remove do
t[keys_to_remove[i]] = nil
end
return setmetatable(t, self)
end
do
local Proxy = {}
function Proxy:__index(k)
local v = Proxy[k]
if v ~= nil then
return v
end
return self.__chars[k]
end
function Proxy:__newindex(k, v)
local key = self.__keys[k]
if key then
self.__chars[k] = v
self.__parents[key] = v
elseif key == false then
error("Character is immutable.")
else
error("Invalid key.")
end
end
function Proxy:build(a, b, c)
local len = self.__len + 1
self.__chars[len] = a
self.__parents[len] = b
self.__keys[len] = c
self.__len = len
end
function Proxy:iter(i)
i = i + 1
local char = self.__chars[i]
if char ~= nil then
return i, self[i], self, self.__parents[i], self.__keys[i]
end
end
function Node:new_proxy()
return setmetatable({
__node = self,
__chars = {},
__parents = {},
__keys = {},
__len = 0
}, Proxy)
end
end
------------------------------------------------------------------------------------
--
-- Parser
--
------------------------------------------------------------------------------------
local Parser = {}
Parser.__index = Parser
function Parser:read(delta)
local v = self.text[self.head + (delta or 0)]
return v == nil and "" or v
end
function Parser:advance(n)
self.head = self.head + (n == nil and 1 or n)
end
function Parser:layer(n)
if n ~= nil then
return rawget(self, #self + n)
end
return self[-1]
end
function Parser:emit(a, b)
local layer = self[-1]
if b ~= nil then
insert(layer, signed_index(layer, a), b)
else
rawset(layer, #layer + 1, a)
end
end
function Parser:emit_tokens(a, b)
local layer = self[-1]
if b ~= nil then
a = signed_index(layer, a)
for i = 1, #b do
insert(layer, a + i - 1, b[i])
end
else
local len = #layer
for i = 1, #a do
len = len + 1
rawset(layer, len, a[i])
end
end
end
function Parser:remove(n)
local layer = self[-1]
if n ~= nil then
return remove(layer, signed_index(layer, n))
end
local len = #layer
local token = layer[len]
layer[len] = nil
return token
end
function Parser:replace(a, b)
local layer = self[-1]
layer[signed_index(layer, a)] = b
end
-- Unlike default table.concat, this respects __tostring metamethods.
function Parser:concat(a, b, c)
if a == nil or a > 0 then
return self:concat(0, a, b)
end
local layer, ret, n = self:layer(a), {}, 0
for i = b and signed_index(layer, b) or 1, c and signed_index(layer, c) or #layer do
n = n + 1
ret[n] = tostring(layer[i])
end
return concat(ret)
end
function Parser:emitted(delta)
if delta == nil then
delta = -1
end
local i = 0
while true do
local layer = self:layer(i)
if layer == nil then
return nil
end
local layer_len = #layer
if -delta <= layer_len then
return rawget(layer, layer_len + delta + 1)
end
delta = delta + layer_len
i = i - 1
end
end
function Parser:push(route)
local layer = {
head = self.head,
route = route
}
self[#self + 1] = layer
self[-1] = layer
end
function Parser:push_sublayer(handler, inherit)
local sublayer = {
handler = handler,
sublayer = true
}
if inherit then
local layer = self[-1]
setmetatable(sublayer, inherit_metamethods({
__index = layer,
__newindex = layer
}, getmetatable(layer)))
end
self[#self + 1] = sublayer
self[-1] = sublayer
end
function Parser:pop()
local len, layer = #self
while true do
layer = self[len]
self[len] = nil
len = len - 1
local new = self[len]
self[-1] = new == nil and self or new
if layer.sublayer == nil then
break
end
self:emit_tokens(layer)
end
return layer
end
function Parser:pop_sublayer()
local len, layer = #self, self[-1]
self[len] = nil
local new = self[len - 1]
self[-1] = new == nil and self or new
setmetatable(layer, nil)
layer.sublayer = nil
return layer
end
function Parser:get(route, ...)
self:push(route)
local layer = route(self, ...)
if layer == nil then
layer = self:traverse()
end
return layer
end
function Parser:try(route, ...)
local failed_layer = get_nested(self.failed_routes, route, self.head)
if failed_layer ~= nil then
return false, failed_layer
end
local layer = self:get(route, ...)
return not layer.fail, layer
end
function Parser:consume(this, ...)
local layer = self[-1]
if this == nil then
this = self:read()
end
return (layer.override or layer.handler)(self, this, ...)
end
function Parser:fail_route()
local layer = self:pop()
layer.fail = true
set_nested(self, "failed_routes", layer.route, layer.head, layer)
self.head = layer.head
return layer
end
function Parser:traverse()
while true do
local layer = self:consume()
if layer ~= nil then
return layer
end
self:advance()
end
end
-- Converts a handler into a switch table the first time it's called, which avoids creating unnecessary objects, and prevents any scoping issues caused by parser methods being assigned to table keys before they've been declared.
-- false is used as the default key.
do
local Switch = {}
function Switch:__call(parser, this)
return (self[this] or self[false])(parser, this)
end
function Parser:switch(func, t)
local layer = self[-1]
-- Point handler to the new switch table if the calling function is the current handler.
if layer.handler == func then
layer.handler = t
end
return setmetatable(t, Switch)
end
end
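-- A handler normally converts itself on its first call, along the lines of
-- (a sketch; "Subparser", "open_brace" and "plain_text" are hypothetical):
--   function Subparser:main(this)
--       return self:switch(Subparser.main, {
--           ["{"] = Subparser.open_brace,
--           [false] = Subparser.plain_text,
--       })(self, this)
--   end
-- After that first call the layer's handler is the switch table itself, so
-- subsequent characters dispatch straight through Switch:__call.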
-- Generate a new parser class object, which is used as the template for any parser objects. These should be customized with additional/modified methods as needed.
function Parser:new_class()
local t = {}
t.__index = t
return setmetatable(inherit_metamethods(t, self), self)
end
-- Generate a new parser object, which is used for a specific parse.
function Parser:new(text)
return setmetatable({
text = text,
head = 1
}, self)
end
function Parser:parse(data)
local parser = self:new(data.text)
local success, tokens = parser:try(unpack(data.route))
if #parser > 0 then
-- This shouldn't happen.
error("Parser exited with non-empty stack.")
elseif success then
local node = data.node
return true, node[1]:new(tokens, unpack(node, 2)), parser
elseif data.allow_fail then
return false, nil, parser
end
error("Parser exited with failed route.")
end
export.class_else_type = class_else_type
export.is_node = is_node
export.tostring = tostring
function export.new()
return Parser:new_class(), Node:new_class("node")
end
return export
5doe3lkc4ta9zlu34tmahwx9wo5wdxx
193470
193469
2024-11-21T10:40:30Z
Lee
19
One revision from [[:en:Module:parser]]
193469
Scribunto
text/plain
local export = {}
local concat = table.concat
local deepcopy -- Assigned when needed.
local getmetatable = getmetatable
local insert = table.insert
local next = next
local rawget = rawget
local rawset = rawset
local remove = table.remove
local setmetatable = setmetatable
local type = type
local unpack = unpack
local classes = {}
local metamethods = mw.loadData("Module:data/metamethods")
------------------------------------------------------------------------------------
--
-- Helper functions
--
------------------------------------------------------------------------------------
local function get_nested(t, k, ...)
if t == nil then
return nil
elseif ... == nil then
return t[k]
end
return get_nested(t[k], ...)
end
local function set_nested(t, k, v, ...)
if ... ~= nil then
local t_next = t[k]
if t_next == nil then
t_next = {}
t[k] = t_next
end
return set_nested(t_next, v, ...)
end
t[k] = v
end
local function inherit_metamethods(child, parent)
if parent then
for method, value in next, parent do
if child[method] == nil and metamethods[method] ~= nil then
child[method] = value
end
end
end
return child
end
local function signed_index(t, n)
return n and n <= 0 and #t + 1 + n or n
end
local function is_node(value)
return classes[getmetatable(value)] ~= nil
end
-- Recursively calling tostring() adds to the C stack (limit: 200), whereas
-- calling __tostring metamethods directly does not. Occasionally relevant when
-- dealing with very deep nesting.
local tostring
do
local _tostring = _G.tostring
function tostring(value)
if is_node(value) then
return value:__tostring(value)
end
return _tostring(value)
end
end
local function class_else_type(value)
local class = classes[getmetatable(value)]
if class ~= nil then
return class
end
return type(value)
end
------------------------------------------------------------------------------------
--
-- Nodes
--
------------------------------------------------------------------------------------
local Node = {}
Node.__index = Node
function Node:next(i)
i = i + 1
return self[i], self, i
end
function Node:next_node(i)
local v
repeat
v, self, i = self:next(i)
until v == nil or is_node(v)
return v, self, i
end
-- Implements recursive iteration over a node tree, using functors to maintain state (which uses a lot less memory than closures). Iterator1 exists only to return the calling node on the first iteration, while Iterator2 uses a stack to store the state of each layer in the tree.
-- When a node is encountered (which may contain other nodes), it is returned on the first iteration, and then any child nodes are returned on each subsequent iteration; the same process is followed if any of those children contain nodes themselves. Once a particular node has been fully traversed, the iterator moves back up one layer and continues with any sibling nodes.
-- Each iteration returns three values: `value`, `node` and `key`. Together, these can be used to manipulate the node tree at any given point without needing to know the full structure. Note that when the input node is returned on the first iteration, `node` and `key` will be nil.
-- By default, the iterator will use the `next` method of each node, but this can be changed with the `next_func` parameter, which accepts a string argument with the name of a next method. This is because trees might consist of several different classes of node, and each might have different next methods that are tailored to their particular structures. In addition, each class of node might have multiple different next methods, which can be named according to their purposes. `next_func` ensures that the iterator uses equivalent next methods between different types of node.
-- Currently, two next methods are available: `next`, which simply iterates over the node conventionally, and `next_node`, which only returns children that are themselves nodes. Custom next methods can be declared by any calling module.
do
local Iterator1, Iterator2 = {}, {}
Iterator1.__index = Iterator2 -- Not a typo.
Iterator2.__index = Iterator2
function Iterator1:__call()
setmetatable(self, Iterator2)
return self[1].node
end
function Iterator2:push(node)
local layer = {
k = 0,
node = node
}
self[#self + 1] = layer
self[-1] = layer
return self
end
function Iterator2:pop()
local len = #self
self[len] = nil
self[-1] = self[len - 1]
end
function Iterator2:iterate(layer, ...)
local v, node, k = ...
if v ~= nil then
layer.k = k
return ...
end
self:pop()
layer = self[-1]
if layer ~= nil then
node = layer.node
return self:iterate(layer, node[self.next_func](node, layer.k))
end
end
function Iterator2:__call()
local layer = self[-1]
local node, k = layer.node, layer.k
local curr_val = node[k]
if is_node(curr_val) then
self:push(curr_val)
layer = self[-1]
node, k = layer.node, layer.k
end
return self:iterate(layer, node[self.next_func](node, k))
end
function Node:__pairs(next_func)
return setmetatable({
next_func = next_func == nil and "next" or next_func
}, Iterator1):push(self)
end
end
function Node:rawpairs()
return next, self
end
function Node:__tostring()
local output = {}
for i = 1, #self do
insert(output, tostring(self[i]))
end
return concat(output)
end
function Node:clone()
if not deepcopy then
deepcopy = require("Module:table").deepcopy
end
return deepcopy(self, "keep", true)
end
function Node:new_class(class)
local t = {type = class}
t.__index = t
t = inherit_metamethods(t, self)
classes[t] = class
return setmetatable(t, self)
end
Node.keys_to_remove = {"fail", "handler", "head", "override", "route"}
function Node:new(t)
setmetatable(t, nil)
local keys_to_remove = self.keys_to_remove
for i = 1, #keys_to_remove do
t[keys_to_remove[i]] = nil
end
return setmetatable(t, self)
end
do
local Proxy = {}
function Proxy:__index(k)
local v = Proxy[k]
if v ~= nil then
return v
end
return self.__chars[k]
end
function Proxy:__newindex(k, v)
local key = self.__keys[k]
if key then
self.__chars[k] = v
self.__parents[key] = v
elseif key == false then
error("Character is immutable.")
else
error("Invalid key.")
end
end
function Proxy:build(a, b, c)
local len = self.__len + 1
self.__chars[len] = a
self.__parents[len] = b
self.__keys[len] = c
self.__len = len
end
function Proxy:iter(i)
i = i + 1
local char = self.__chars[i]
if char ~= nil then
return i, self[i], self, self.__parents[i], self.__keys[i]
end
end
function Node:new_proxy()
return setmetatable({
__node = self,
__chars = {},
__parents = {},
__keys = {},
__len = 0
}, Proxy)
end
end
------------------------------------------------------------------------------------
--
-- Parser
--
------------------------------------------------------------------------------------
local Parser = {}
Parser.__index = Parser
function Parser:read(delta)
local v = self.text[self.head + (delta or 0)]
return v == nil and "" or v
end
function Parser:advance(n)
self.head = self.head + (n == nil and 1 or n)
end
function Parser:layer(n)
if n ~= nil then
return rawget(self, #self + n)
end
return self[-1]
end
function Parser:emit(a, b)
local layer = self[-1]
if b ~= nil then
insert(layer, signed_index(layer, a), b)
else
rawset(layer, #layer + 1, a)
end
end
function Parser:emit_tokens(a, b)
local layer = self[-1]
if b ~= nil then
a = signed_index(layer, a)
for i = 1, #b do
insert(layer, a + i - 1, b[i])
end
else
local len = #layer
for i = 1, #a do
len = len + 1
rawset(layer, len, a[i])
end
end
end
function Parser:remove(n)
local layer = self[-1]
if n ~= nil then
return remove(layer, signed_index(layer, n))
end
local len = #layer
local token = layer[len]
layer[len] = nil
return token
end
function Parser:replace(a, b)
local layer = self[-1]
layer[signed_index(layer, a)] = b
end
-- Unlike default table.concat, this respects __tostring metamethods.
function Parser:concat(a, b, c)
if a == nil or a > 0 then
return self:concat(0, a, b)
end
local layer, ret, n = self:layer(a), {}, 0
for i = b and signed_index(layer, b) or 1, c and signed_index(layer, c) or #layer do
n = n + 1
ret[n] = tostring(layer[i])
end
return concat(ret)
end
function Parser:emitted(delta)
if delta == nil then
delta = -1
end
local i = 0
while true do
local layer = self:layer(i)
if layer == nil then
return nil
end
local layer_len = #layer
if -delta <= layer_len then
return rawget(layer, layer_len + delta + 1)
end
delta = delta + layer_len
i = i - 1
end
end
function Parser:push(route)
local layer = {
head = self.head,
route = route
}
self[#self + 1] = layer
self[-1] = layer
end
function Parser:push_sublayer(handler, inherit)
local sublayer = {
handler = handler,
sublayer = true
}
if inherit then
local layer = self[-1]
setmetatable(sublayer, inherit_metamethods({
__index = layer,
__newindex = layer
}, getmetatable(layer)))
end
self[#self + 1] = sublayer
self[-1] = sublayer
end
function Parser:pop()
local len, layer = #self
while true do
layer = self[len]
self[len] = nil
len = len - 1
local new = self[len]
self[-1] = new == nil and self or new
if layer.sublayer == nil then
break
end
self:emit_tokens(layer)
end
return layer
end
function Parser:pop_sublayer()
local len, layer = #self, self[-1]
self[len] = nil
local new = self[len - 1]
self[-1] = new == nil and self or new
setmetatable(layer, nil)
layer.sublayer = nil
return layer
end
function Parser:get(route, ...)
self:push(route)
local layer = route(self, ...)
if layer == nil then
layer = self:traverse()
end
return layer
end
function Parser:try(route, ...)
local failed_layer = get_nested(self.failed_routes, route, self.head)
if failed_layer ~= nil then
return false, failed_layer
end
local layer = self:get(route, ...)
return not layer.fail, layer
end
function Parser:consume(this, ...)
local layer = self[-1]
if this == nil then
this = self:read()
end
return (layer.override or layer.handler)(self, this, ...)
end
function Parser:fail_route()
local layer = self:pop()
layer.fail = true
set_nested(self, "failed_routes", layer.route, layer.head, layer)
self.head = layer.head
return layer
end
function Parser:traverse()
while true do
local layer = self:consume()
if layer ~= nil then
return layer
end
self:advance()
end
end
-- Converts a handler into a switch table the first time it's called, which avoids creating unnecessary objects, and prevents any scoping issues caused by parser methods being assigned to table keys before they've been declared.
-- false is used as the default key.
do
local Switch = {}
function Switch:__call(parser, this)
return (self[this] or self[false])(parser, this)
end
function Parser:switch(func, t)
local layer = self[-1]
-- Point handler to the new switch table if the calling function is the current handler.
if layer.handler == func then
layer.handler = t
end
return setmetatable(t, Switch)
end
end
-- Generate a new parser class object, which is used as the template for any parser objects. These should be customized with additional/modified methods as needed.
function Parser:new_class()
local t = {}
t.__index = t
return setmetatable(inherit_metamethods(t, self), self)
end
-- Generate a new parser object, which is used for a specific parse.
function Parser:new(text)
return setmetatable({
text = text,
head = 1
}, self)
end
function Parser:parse(data)
local parser = self:new(data.text)
local success, tokens = parser:try(unpack(data.route))
if #parser > 0 then
-- This shouldn't happen.
error("Parser exited with non-empty stack.")
elseif success then
local node = data.node
return true, node[1]:new(tokens, unpack(node, 2)), parser
elseif data.allow_fail then
return false, nil, parser
end
error("Parser exited with failed route.")
end
export.class_else_type = class_else_type
export.is_node = is_node
export.tostring = tostring
function export.new()
return Parser:new_class(), Node:new_class("node")
end
return export
5doe3lkc4ta9zlu34tmahwx9wo5wdxx
Module:labels/data/lang/id
828
20733
193335
52764
2024-09-17T06:19:06Z
en>Benwing2
0
add 'Classical'
193335
Scribunto
text/plain
local labels = {}
----Dialects go here----
labels["Bandung"] = {
Wikipedia = true,
regional_categories = true,
region = "the province of [[West Java]]",
parent = "Java",
}
labels["Banten"] = {
aliases = {"Bantenese"},
Wikipedia = true,
regional_categories = true,
region = "the province of [[Banten]]",
parent = "Java",
}
labels["Indonesia"] = {
Wikipedia = true,
regional_categories = "Indonesian",
parent = true,
}
labels["Jakarta"] = {
aliases = {"Jakartan", "Acrolectal Colloquial Jakarta", "Colloquial Jakarta"},
Wikipedia = true,
regional_categories = true,
parent = "Java",
}
labels["Jambi"] = {
Wikipedia = true,
regional_categories = true,
parent = "Sumatra",
}
labels["Java"] = {
aliases = {"Javanese"},
Wikipedia = true,
regional_categories = "Javanese",
prep = "on",
region = "the island of [[Java]]",
parent = "Indonesia",
}
labels["Kalimantan"] = {
Wikipedia = true,
regional_categories = true,
prep = "on",
parent = "Indonesia",
}
labels["Medan"] = {
Wikipedia = true,
regional_categories = true,
parent = "North Sumatra",
}
labels["North Sumatra"] = {
aliases = {"Sumut"},
Wikipedia = true,
regional_categories = "North Sumatran",
parent = "Sumatra",
}
labels["Padang"] = {
Wikipedia = true,
regional_categories = true,
parent = "Sumatra",
}
labels["Palembang"] = {
Wikipedia = true,
regional_categories = true,
parent = "Sumatra",
}
labels["Papuan"] = {
aliases = {"Papua"},
Wikipedia = true,
regional_categories = true,
parent = "Indonesia",
}
labels["Pontianak"] = {
Wikipedia = true,
regional_categories = true,
parent = "West Kalimantan",
}
labels["Riau"] = {
Wikipedia = true,
regional_categories = true,
parent = "Sumatra",
}
labels["Sumatra"] = {
Wikipedia = true,
regional_categories = "Sumatran",
prep = "on",
parent = "Indonesia",
}
labels["Surabaya"] = {
Wikipedia = true,
regional_categories = true,
parent = "Java",
}
labels["West Kalimantan"] = {
aliases = {"Kalbar"},
Wikipedia = true,
regional_categories = true,
region = "the province of [[West Kalimantan]]",
parent = "Kalimantan",
}
labels["Yogyakarta"] = {
Wikipedia = true,
regional_categories = true,
parent = "Java",
}
----Other labels go here----
labels["pre-1947"] = {
aliases = {"Van Ophuijsen spelling", "Van Ophuijsen orthography", "Dutch-based spelling", "Dutch-based orthography", "van Ophuijsen orthography", "van Ophuijsen spelling"},
Wikipedia = "Van Ophuijsen Spelling System",
plain_categories = "Indonesian pre-1947 forms",
}
labels["1947-1972"] = {
aliases = {"Republican spelling", "Republican orthography", "pre-1967", "1947-1967", "pre-1972", "Soewandi spelling", "Soewandi orthography", "Soewandi"},
Wikipedia = "Republican Spelling System",
plain_categories = "Indonesian 1947-1972 forms",
}
labels["Classical"] = {
aliases = {"classical"},
regional_categories = true,
def = "[[w:History_of_the_Malay_language#Pre-Modern_Malay_(19th_century)|pre-modern Malay]] as spoken in 19th-century [[Indonesia]]",
parent = true,
noreg = true,
}
labels["Standard Indonesian"] = {
Wikipedia = "Indonesian language#Phonology",
}
labels["Prokem"] = {
regional_categories = true,
parent = true,
region = "the former colloquial variant of [[Indonesian]] popular in the 1970s and 1980s, since replaced by {{w|Gaul Indonesian Language|Gaul Indonesian}}",
noreg = true,
}
labels["Gaul"] = {
regional_categories = true,
parent = true,
region = "the colloquial variant of [[Indonesian]] that has been established since the 1980s, originally " ..
"centered in [[Jakarta]] and based on the {{w|Betawi language}}, but now spread throughout Indonesia",
noreg = true,
}
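-- On entry pages these labels are normally invoked through the label
-- template ({{lb}} on the English Wiktionary), e.g. {{lb|id|Jakarta}}
-- (a sketch; "Jakarta" is one of the keys above). Since that entry sets
-- regional_categories, the call also adds the page to the matching
-- regional category besides displaying the label.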
return require("Module:labels").finalize_data(labels)
6uszeokiako54qqnlhycmnocfibqq73
193336
193335
2024-11-20T12:07:08Z
Lee
19
One revision from [[:en:Module:labels/data/lang/id]]
193335
Scribunto
text/plain
local labels = {}
----Dialects go here----
labels["Bandung"] = {
Wikipedia = true,
regional_categories = true,
region = "the province of [[West Java]]",
parent = "Java",
}
labels["Banten"] = {
aliases = {"Bantenese"},
Wikipedia = true,
regional_categories = true,
region = "the province of [[Banten]]",
parent = "Java",
}
labels["Indonesia"] = {
Wikipedia = true,
regional_categories = "Indonesian",
parent = true,
}
labels["Jakarta"] = {
aliases = {"Jakartan", "Acrolectal Colloquial Jakarta", "Colloquial Jakarta"},
Wikipedia = true,
regional_categories = true,
parent = "Java",
}
labels["Jambi"] = {
Wikipedia = true,
regional_categories = true,
parent = "Sumatra",
}
labels["Java"] = {
aliases = {"Javanese"},
Wikipedia = true,
regional_categories = "Javanese",
prep = "on",
region = "the island of [[Java]]",
parent = "Indonesia",
}
labels["Kalimantan"] = {
Wikipedia = true,
regional_categories = true,
prep = "on",
parent = "Indonesia",
}
labels["Medan"] = {
Wikipedia = true,
regional_categories = true,
parent = "North Sumatra",
}
labels["North Sumatra"] = {
aliases = {"Sumut"},
Wikipedia = true,
regional_categories = "North Sumatran",
parent = "Sumatra",
}
labels["Padang"] = {
Wikipedia = true,
regional_categories = true,
parent = "Sumatra",
}
labels["Palembang"] = {
Wikipedia = true,
regional_categories = true,
parent = "Sumatra",
}
labels["Papuan"] = {
aliases = {"Papua"},
Wikipedia = true,
regional_categories = true,
parent = "Indonesia",
}
labels["Pontianak"] = {
Wikipedia = true,
regional_categories = true,
parent = "West Kalimantan",
}
labels["Riau"] = {
Wikipedia = true,
regional_categories = true,
parent = "Sumatra",
}
labels["Sumatra"] = {
Wikipedia = true,
regional_categories = "Sumatran",
prep = "on",
parent = "Indonesia",
}
labels["Surabaya"] = {
Wikipedia = true,
regional_categories = true,
parent = "Java",
}
labels["West Kalimantan"] = {
aliases = {"Kalbar"},
Wikipedia = true,
regional_categories = true,
region = "the province of [[West Kalimantan]]",
parent = "Kalimantan",
}
labels["Yogyakarta"] = {
Wikipedia = true,
regional_categories = true,
parent = "Java",
}
----Other labels go here----
labels["pre-1947"] = {
aliases = {"Van Ophuijsen spelling", "Van Ophuijsen orthography", "Dutch-based spelling", "Dutch-based orthography", "van Ophuijsen orthography", "van Ophuijsen spelling"},
Wikipedia = "Van Ophuijsen Spelling System",
plain_categories = "Indonesian pre-1947 forms",
}
labels["1947-1972"] = {
aliases = {"Republican spelling", "Republican orthography", "pre-1967", "1947-1967", "pre-1972", "Soewandi spelling", "Soewandi orthography", "Soewandi"},
Wikipedia = "Republican Spelling System",
plain_categories = "Indonesian 1947-1972 forms",
}
labels["Classical"] = {
aliases = {"classical"},
regional_categories = true,
def = "[[w:History_of_the_Malay_language#Pre-Modern_Malay_(19th_century)|pre-modern Malay]] as spoken in 19th-century [[Indonesia]]",
parent = true,
noreg = true,
}
labels["Standard Indonesian"] = {
Wikipedia = "Indonesian language#Phonology",
}
labels["Prokem"] = {
regional_categories = true,
parent = true,
region = "the former colloquial variant of [[Indonesian]] popular in the 1970s and 1980s, since replaced by {{w|Gaul Indonesian Language|Gaul Indonesian}}",
noreg = true,
}
labels["Gaul"] = {
regional_categories = true,
parent = true,
region = "the colloquial variant of [[Indonesian]] that has been established since the 1980s, originally " ..
"centered in [[Jakarta]] and based on the {{w|Betawi language}}, but now spread throughout Indonesia",
noreg = true,
}
return require("Module:labels").finalize_data(labels)
6uszeokiako54qqnlhycmnocfibqq73
Module:labels/data/lang/la
828
20773
193339
52804
2024-05-27T23:26:14Z
en>WingerBot
0
move 9 label(s) (commented-out Classical,Ecclesiastical,New Latin,Proto-Balkan-Romance,Proto-Gallo-Romance,Proto-Ibero-Romance,Proto-Italo-Romance,Proto-Italo-Western-Romance,Vulgar) from [[Module:accent qualifier/data]] to lang-specific labels data module; fix conflicts between [[Module:accent qualifier/data]] and lang-specific label modules (manually assisted)
193339
Scribunto
text/plain
local labels = {}
labels["British Contemporary Latin"] = {
aliases = {"Contemporary Anglo-Latin", "Contemporary British"},
display = "British [[w:Contemporary Latin|Contemporary Latin]]",
plain_categories = true,
}
labels["British Late Latin"] = {
aliases = {"Late Anglo-Latin", "Late British"},
display = "British [[w:Late Latin|Late Latin]]",
plain_categories = true,
}
labels["British Latin"] = {
aliases = {"Vulgar Anglo-Latin", "Vulgar British", "British Vulgar Latin"},
Wikipedia = true,
plain_categories = true,
}
labels["British Medieval Latin"] = {
aliases = {"British Mediaeval Latin", "Medieval Anglo-Latin", "Mediaeval Anglo-Latin", "Medieval British", "Mediaeval British"},
display = "British [[w:Medieval Latin|Medieval Latin]]",
plain_categories = true,
}
labels["British New Latin"] = {
aliases = {"New Anglo-Latin", "New British"},
display = "British [[w:New Latin|New Latin]]",
plain_categories = true,
}
labels["British Renaissance Latin"] = {
aliases = {"Renaissance Anglo-Latin", "Renaissance British"},
display = "British [[w:Renaissance Latin|Renaissance Latin]]",
plain_categories = true,
}
labels["Classical Latin"] = {
aliases = {"CL.", "classical", "Classical", "cla"},
Wikipedia = true,
plain_categories = true,
}
labels["Early Medieval Latin"] = {
aliases = {"EML."},
display = "Early [[w:Medieval Latin|Medieval Latin]]",
regional_categories = "Medieval",
plain_categories = true,
}
labels["Ecclesiastical Latin"] = {
aliases = {"Church Latin", "EL.", "ecclesiastical", "Ecclesiastical", "Ecclesiastic", "eccl"},
Wikipedia = true,
accent_Wikipedia = "Latin phonology and orthography#Ecclesiastical pronunciation",
accent_display = "modern Italianate Ecclesiastical",
plain_categories = true,
}
labels["Epigraphic Latin"] = {
aliases = {"Epigraphic", "epigraphic", "in inscriptions", "inscriptions", "epi"},
Wikipedia = "Epigraphy",
plain_categories = true,
}
labels["Late Latin"] = {
aliases = {"LL."},
Wikipedia = true,
plain_categories = true,
}
labels["Medieval Latin"] = {
aliases = {"Mediaeval Latin", "ML.", "Medieval", "Mediaeval", "medieval", "mediaeval", "med"},
Wikipedia = true,
plain_categories = true,
}
labels["Renaissance Latin"] = {
aliases = {"renaissance", "Renaissance", "ren"},
Wikipedia = true,
plain_categories = true,
}
labels["New Latin"] = {
aliases = {"Neo-Latin", "new"},
Wikipedia = true,
plain_categories = true,
}
labels["Contemporary Latin"] = {
aliases = {"contemporary", "Contemporary", "con"},
Wikipedia = true,
plain_categories = true,
}
labels["Old Latin"] = {
Wikipedia = true,
plain_categories = true,
}
labels["Old Latin lemma"] = {
display = "[[w:Old Latin|Old Latin]]",
plain_categories = "Old Latin lemmas",
}
labels["Old Latin non-lemma"] = {
display = "[[w:Old Latin|Old Latin]]",
plain_categories = "Old Latin non-lemma forms",
}
labels["pre-classical"] = {
aliases = {"Pre-classical", "pre-Classical", "Pre-Classical", "Preclassical", "preclassical", "ante-classical", "Ante-classical", "ante-Classical", "Ante-Classical", "Anteclassical", "anteclassical", "acl"},
display = "pre-Classical",
Wikipedia = "Old Latin",
regional_categories = "Old",
}
labels["pre-classical lemma"] = {
aliases = {"Pre-classical lemma", "pre-Classical lemma", "Pre-Classical lemma", "Preclassical lemma", "preclassical lemma", "ante-classical lemma", "Ante-classical lemma", "ante-Classical lemma", "Ante-Classical lemma", "Anteclassical lemma", "anteclassical lemma"},
display = "pre-Classical",
plain_categories = "Old Latin lemmas",
}
labels["pre-classical non-lemma"] = {
aliases = {"Pre-classical non-lemma", "pre-Classical non-lemma", "Pre-Classical non-lemma", "Preclassical non-lemma", "preclassical non-lemma", "ante-classical non-lemma", "Ante-classical non-lemma", "ante-Classical non-lemma", "Ante-Classical non-lemma", "Anteclassical non-lemma", "anteclassical non-lemma"},
display = "pre-Classical",
plain_categories = "Old Latin non-lemma forms",
}
labels["Vulgar Latin"] = {
aliases = {"Vulgar", "vul"},
Wikipedia = true,
plain_categories = true,
}
-- Proto-Romance:
labels["Proto-Ibero-Romance"] = {
aliases = {"PIbR"},
Wikipedia = true,
plain_categories = true,
}
labels["Proto-Balkan-Romance"] = {
aliases = {"PBR"},
Wikipedia = true,
plain_categories = true,
}
labels["Proto-Gallo-Romance"] = {
aliases = {"PGR"},
Wikipedia = true,
plain_categories = true,
}
labels["Proto-Italo-Western-Romance"] = {
aliases = {"PIWR"},
Wikipedia = true,
plain_categories = true,
}
labels["Proto-Italo-Romance"] = {
aliases = {"PItR"},
Wikipedia = "Italo-Romance languages",
plain_categories = true,
}
labels["Proto-Romance"] = {
aliases = {"PR"},
Wikipedia = true,
plain_categories = true,
}
labels["Proto-Western-Romance"] = {
aliases = {"PWR"},
Wikipedia = "Western Romance languages",
plain_categories = true,
}
labels["England"] = {
aliases = {"English"},
Wikipedia = true,
regional_categories = "English",
}
labels["Germany"] = {
aliases = {"German"},
Wikipedia = true,
regional_categories = "German",
}
labels["Hungary"] = {
aliases = {"Hungarian"},
Wikipedia = true,
regional_categories = "Hungarian",
}
-- labels from old [[Module:la:Dialects]]; FIXME!
labels["arh"] = {
display = "Archaic Latin",
Wikipedia = "Old Latin",
}
labels["aug"] = {
aliases = {"post-Augustan"},
display = "post-Augustan",
Wikipedia = "Late Latin#Late and post-classical Latin",
}
labels["eml"] = {
aliases = {"enl", "early modern"},
display = "early modern",
Wikipedia = "New Latin#Height",
}
labels["pcl"] = {
aliases = {"post-classical"},
display = "post-classical",
Wikipedia = "Late Latin#Late and post-classical Latin",
}
labels["vet"] = {
aliases = {"Vetus", "Vetus Latina"},
display = "Vetus Latina",
Wikipedia = "Vetus Latina",
}
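-- Aliases resolve to their canonical key, so a call such as {{lb|la|med}}
-- behaves the same as {{lb|la|Medieval Latin}} (a sketch of the usual alias
-- behaviour; see the aliases lists above).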
return require("Module:labels").finalize_data(labels)
a6kzjnyn1w09kip7h9bwr33f5yc25r0
193340
193339
2024-11-20T12:09:54Z
Lee
19
One revision from [[:en:Module:labels/data/lang/la]]
193339
Scribunto
text/plain
local labels = {}
labels["British Contemporary Latin"] = {
aliases = {"Contemporary Anglo-Latin", "Contemporary British"},
display = "British [[w:Contemporary Latin|Contemporary Latin]]",
plain_categories = true,
}
labels["British Late Latin"] = {
aliases = {"Late Anglo-Latin", "Late British"},
display = "British [[w:Late Latin|Late Latin]]",
plain_categories = true,
}
labels["British Latin"] = {
aliases = {"Vulgar Anglo-Latin", "Vulgar British", "British Vulgar Latin"},
Wikipedia = true,
plain_categories = true,
}
labels["British Medieval Latin"] = {
aliases = {"British Mediaeval Latin", "Medieval Anglo-Latin", "Mediaeval Anglo-Latin", "Medieval British", "Mediaeval British"},
display = "British [[w:Medieval Latin|Medieval Latin]]",
plain_categories = true,
}
labels["British New Latin"] = {
aliases = {"New Anglo-Latin", "New British"},
display = "British [[w:New Latin|New Latin]]",
plain_categories = true,
}
labels["British Renaissance Latin"] = {
aliases = {"Renaissance Anglo-Latin", "Renaissance British"},
display = "British [[w:Renaissance Latin|Renaissance Latin]]",
plain_categories = true,
}
labels["Classical Latin"] = {
aliases = {"CL.", "classical", "Classical", "cla"},
Wikipedia = true,
plain_categories = true,
}
labels["Early Medieval Latin"] = {
aliases = {"EML."},
display = "Early [[w:Medieval Latin|Medieval Latin]]",
regional_categories = "Medieval",
plain_categories = true,
}
labels["Ecclesiastical Latin"] = {
aliases = {"Church Latin", "EL.", "ecclesiastical", "Ecclesiastical", "Ecclesiastic", "eccl"},
Wikipedia = true,
accent_Wikipedia = "Latin phonology and orthography#Ecclesiastical pronunciation",
accent_display = "modern Italianate Ecclesiastical",
plain_categories = true,
}
labels["Epigraphic Latin"] = {
aliases = {"Epigraphic", "epigraphic", "in inscriptions", "inscriptions", "epi"},
Wikipedia = "Epigraphy",
plain_categories = true,
}
labels["Late Latin"] = {
aliases = {"LL."},
Wikipedia = true,
plain_categories = true,
}
labels["Medieval Latin"] = {
aliases = {"Mediaeval Latin", "ML.", "Medieval", "Mediaeval", "medieval", "mediaeval", "med"},
Wikipedia = true,
plain_categories = true,
}
labels["Renaissance Latin"] = {
aliases = {"renaissance", "Renaissance", "ren"},
Wikipedia = true,
plain_categories = true,
}
labels["New Latin"] = {
aliases = {"Neo-Latin", "new"},
Wikipedia = true,
plain_categories = true,
}
labels["Contemporary Latin"] = {
aliases = {"contemporary", "Contemporary", "con"},
Wikipedia = true,
plain_categories = true,
}
labels["Old Latin"] = {
Wikipedia = true,
plain_categories = true,
}
labels["Old Latin lemma"] = {
display = "[[w:Old Latin|Old Latin]]",
plain_categories = "Old Latin lemmas",
}
labels["Old Latin non-lemma"] = {
display = "[[w:Old Latin|Old Latin]]",
plain_categories = "Old Latin non-lemma forms",
}
labels["pre-classical"] = {
aliases = {"Pre-classical", "pre-Classical", "Pre-Classical", "Preclassical", "preclassical", "ante-classical", "Ante-classical", "ante-Classical", "Ante-Classical", "Anteclassical", "anteclassical", "acl"},
display = "pre-Classical",
Wikipedia = "Old Latin",
regional_categories = "Old",
}
labels["pre-classical lemma"] = {
aliases = {"Pre-classical lemma", "pre-Classical lemma", "Pre-Classical lemma", "Preclassical lemma", "preclassical lemma", "ante-classical lemma", "Ante-classical lemma", "ante-Classical lemma", "Ante-Classical lemma", "Anteclassical lemma", "anteclassical lemma"},
display = "pre-Classical",
plain_categories = "Old Latin lemmas",
}
labels["pre-classical non-lemma"] = {
aliases = {"Pre-classical non-lemma", "pre-Classical non-lemma", "Pre-Classical non-lemma", "Preclassical non-lemma", "preclassical non-lemma", "ante-classical non-lemma", "Ante-classical non-lemma", "ante-Classical non-lemma", "Ante-Classical non-lemma", "Anteclassical non-lemma", "anteclassical non-lemma"},
display = "pre-Classical",
plain_categories = "Old Latin non-lemma forms",
}
labels["Vulgar Latin"] = {
aliases = {"Vulgar", "vul"},
Wikipedia = true,
plain_categories = true,
}
-- Proto-Romance:
labels["Proto-Ibero-Romance"] = {
aliases = {"PIbR"},
Wikipedia = true,
plain_categories = true,
}
labels["Proto-Balkan-Romance"] = {
aliases = {"PBR"},
Wikipedia = true,
plain_categories = true,
}
labels["Proto-Gallo-Romance"] = {
aliases = {"PGR"},
Wikipedia = true,
plain_categories = true,
}
labels["Proto-Italo-Western-Romance"] = {
aliases = {"PIWR"},
Wikipedia = true,
plain_categories = true,
}
labels["Proto-Italo-Romance"] = {
aliases = {"PItR"},
Wikipedia = "Italo-Romance languages",
plain_categories = true,
}
labels["Proto-Romance"] = {
aliases = {"PR"},
Wikipedia = true,
plain_categories = true,
}
labels["Proto-Western-Romance"] = {
aliases = {"PWR"},
Wikipedia = "Western Romance languages",
plain_categories = true,
}
labels["England"] = {
aliases = {"English"},
Wikipedia = true,
regional_categories = "English",
}
labels["Germany"] = {
aliases = {"German"},
Wikipedia = true,
regional_categories = "German",
}
labels["Hungary"] = {
aliases = {"Hungarian"},
Wikipedia = true,
regional_categories = "Hungarian",
}
-- labels from old [[Module:la:Dialects]]; FIXME!
labels["arh"] = {
display = "Archaic Latin",
Wikipedia = "Old Latin",
}
labels["aug"] = {
aliases = {"post-Augustan"},
display = "post-Augustan",
Wikipedia = "Late Latin#Late and post-classical Latin",
}
labels["eml"] = {
aliases = {"enl", "early modern"},
display = "early modern",
Wikipedia = "New Latin#Height",
}
labels["pcl"] = {
aliases = {"post-classical"},
display = "post-classical",
Wikipedia = "Late Latin#Late and post-classical Latin",
}
labels["vet"] = {
aliases = {"Vetus", "Vetus Latina"},
display = "Vetus Latina",
Wikipedia = "Vetus Latina",
}
return require("Module:labels").finalize_data(labels)
a6kzjnyn1w09kip7h9bwr33f5yc25r0
Module:labels/data/lang/lad
828
20775
193341
52806
2024-04-27T09:28:41Z
en>SurjectionBot
0
Protected "[[Module:labels/data/lang/lad]]": (bot) automatically protect highly visible templates/modules (reference score: 1137+ >= 1000) ([Edit=Allow only autoconfirmed users] (indefinite) [Move=Allow only autoconfirmed users] (indefinite))
193341
Scribunto
text/plain
local labels = {}
labels["Haketia"] = {
aliases = {"Hakitia", "Haquitía"},
Wikipedia = true,
plain_categories = true,
}
return require("Module:labels").finalize_data(labels)
4y5uc4a71zw2szte0vg5nl5shwqihiy
193342
193341
2024-11-20T12:10:10Z
Lee
19
[[:en:Module:labels/data/lang/lad]] වෙතින් එක් සංශෝධනයක්
193341
Scribunto
text/plain
local labels = {}
labels["Haketia"] = {
aliases = {"Hakitia", "Haquitía"},
Wikipedia = true,
plain_categories = true,
}
return require("Module:labels").finalize_data(labels)
4y5uc4a71zw2szte0vg5nl5shwqihiy
Module:labels/data/lang/pt
828
20873
193327
52904
2024-10-13T17:11:23Z
en>Sérgio R R Santos
0
added a few shortcuts
193327
Scribunto
text/plain
local labels = {}
labels["Africa"] = {
aliases = {"African"},
Wikipedia = true,
regional_categories = "African",
parent = true,
}
labels["Alagoas"] = {
region = "[[Alagoas]], a state of [[Brazil]]",
aliases = {"Alagoano", "Alagoan"},
Wikipedia = true,
regional_categories = "Alagoan",
parent = "Northeast Brazil",
}
labels["Alentejo"] = {
aliases = {"Alentejan", "alent"},
region = "[[Alentejo]], a {{w|NUTS statistical regions of Portugal|region}} of [[Portugal]]",
Wikipedia = "Alentejan Portuguese",
regional_categories = "Alentejano",
parent = "Portugal",
}
labels["Algarve"] = {
aliases = {"Algarvio", "alg"},
region = "[[Algarve]], a {{w|NUTS statistical regions of Portugal|region}} of [[Portugal]]",
Wikipedia = "Algarvian Portuguese",
regional_categories = "Algarvian",
parent = "Portugal",
}
labels["Amazonas"] = {
region = "[[Amazonas]], a state of [[Brazil]]",
aliases = {"Amazonense"},
Wikipedia = "Amazonas (Brazilian state)",
regional_categories = "Amazonense",
parent = "North Brazil",
}
labels["Angola"] = {
aliases = {"Angolan", "ao"},
Wikipedia = "Angolan Portuguese",
regional_categories = "Angolan",
parent = "Africa",
}
labels["Asia"] = {
aliases = {"Asian"},
Wikipedia = true,
regional_categories = "Asian",
parent = true,
}
labels["Azores"] = {
the = true,
aliases = {"Azorean", "Azorian", "azo"},
Wikipedia = true,
regional_categories = "Azorean",
parent = "Portugal",
}
labels["Bahia"] = {
region = "[[Bahia]], a state of [[Brazil]]",
aliases = {"Baiano", "Bahian"},
Wikipedia = true,
regional_categories = "Bahian",
parent = "Northeast Brazil",
}
labels["Beira"] = {
region = "[[Beira]], a {{w|Provinces of Portugal|traditional province}} of [[Portugal]]",
Wikipedia = "Beira (Portugal)",
regional_categories = "Beirão",
parent = "Portugal",
}
labels["Brazil"] = {
aliases = {"BR", "br", "Brazilian"},
Wikipedia = "Brazilian Portuguese",
regional_categories = "Brazilian",
parent = true,
}
labels["Cape Verde"] = {
aliases = {"Cape Verdean", "Verdean", "Cabo Verde", "Cabo Verdean", "cv"},
aliases = {"Cape Verdean"},
Wikipedia = "Cape Verdean Portuguese",
regional_categories = "Cape Verdean",
parent = "Africa",
}
labels["Ceará"] = {
region = "[[Ceará]], a state of [[Brazil]]",
aliases = {"Cearense"},
Wikipedia = true,
regional_categories = "Cearense",
parent = "Northeast Brazil",
}
labels["Central-West Brazil"] = {
region = "the {{w|Central-West Region, Brazil}}",
aliases = {"Centro-Oestino", "Centro-Oeste", "Central-Western Brazilian", "Central-West Brazilian"},
Wikipedia = "Central-West Region, Brazil",
regional_categories = "Central-Western Brazilian",
parent = "Brazil",
}
labels["Goa"] = {
aliases = {"Goan"},
Wikipedia = "Goan Portuguese",
regional_categories = "Goan",
}
labels["Goiás"] = {
region = "[[Goiás]], a state of [[Brazil]]",
aliases = {"Goiano"},
Wikipedia = true,
regional_categories = "Goiano",
parent = "Central-West Brazil",
}
labels["Guinea-Bissau"] = {
aliases = {"gw", "Guinean", "Guinea-Bissauan"},
Wikipedia = "Guinea-Bissau Portuguese",
regional_categories = true,
parent = "Africa",
}
labels["India"] = {
aliases = {"Indian"},
Wikipedia = {"Goan Portuguese", "Portuguese India"},
regional_categories = "Indian",
parent = "Asia",
}
labels["Macau"] = {
aliases = {"Macao", "Macanese", "mo"},
Wikipedia = "Macau Portuguese",
regional_categories = "Macanese",
parent = "Asia",
}
labels["Madeira"] = {
aliases = {"Madeiran", "mad"},
Wikipedia = true,
regional_categories = "Madeiran",
parent = "Portugal",
}
labels["Maranhão"] = {
region = "[[Maranhão]], a state of [[Brazil]]",
aliases = {"Maranhense"},
Wikipedia = true,
regional_categories = "Maranhense",
parent = "Northeast Brazil",
}
labels["Mato Grosso"] = {
region = "[[Mato Grosso]] and [[Mato Grosso do Sul]], two adjacent states of [[Brazil]]",
aliases = {"Mato-Grossense"},
Wikipedia = true,
regional_categories = "Mato-Grossense",
parent = "Central-West Brazil",
}
labels["Minas Gerais"] = {
region = "[[Minas Gerais]], a state of [[Brazil]]",
aliases = {"Mineiro"},
Wikipedia = true,
regional_categories = "Mineiro",
parent = "Southeast Brazil",
}
labels["Mozambique"] = {
aliases = {"Mozambican", "mz", "moz"},
Wikipedia = "Mozambican Portuguese",
regional_categories = "Mozambican",
parent = "Africa",
}
labels["North Brazil"] = {
aliases = {"Nortista", "Norteiro", "Northern Brazilian", "North Brazilian", "Amazon"},
Wikipedia = "North Region, Brazil",
regional_categories = "Northern Brazilian",
parent = "Brazil",
}
labels["Northeast Brazil"] = {
aliases = {"Nordestino", "Northeastern Brazilian", "Northeast Brazilian"},
Wikipedia = "Northeast Region, Brazil",
regional_categories = "Northeastern Brazilian",
parent = "Brazil",
}
labels["Northern Portugal"] = {
aliases = {"Nortenho", "North Portugal"},
Wikipedia = "Northern Portuguese",
regional_categories = true,
parent = "Portugal",
}
labels["Paraná"] = {
region = "[[Paraná]], a state of [[Brazil]]",
aliases = {"Paranaense"},
Wikipedia = "Paraná (state)",
regional_categories = "Paranaense",
parent = "South Brazil",
}
labels["Pernambuco"] = {
region = "[[Pernambuco]], a state of [[Brazil]]",
aliases = {"Pernambucano", "Pernambucan"},
Wikipedia = true,
regional_categories = "Pernambucan",
parent = "Northeast Brazil",
}
labels["Piauí"] = {
region = "[[Piauí]], a state of [[Brazil]]",
aliases = {"Piauiense"},
Wikipedia = true,
regional_categories = "Piauiense",
parent = "Northeast Brazil",
}
labels["Portugal"] = {
aliases = {"Portuguese", "PT", "pt", "European", "Europe"},
Wikipedia = "European Portuguese",
regional_categories = "European",
parent = true,
}
labels["Rio Grande do Norte"] = {
region = "[[Rio Grande do Norte]], a state of [[Brazil]]",
aliases = {"Potiguar", "Norte-Rio-Grandense"},
Wikipedia = true,
regional_categories = "Potiguar",
parent = "Northeast Brazil",
}
labels["Rio Grande do Sul"] = {
region = "[[Rio Grande do Sul]], a state of [[Brazil]]",
aliases = {"Gaúcho", "Gaucho"},
Wikipedia = true,
regional_categories = "Gaúcho",
parent = "South Brazil",
}
labels["Rio de Janeiro"] = {
region = "[[Rio de Janeiro]], a city and surrounding state of [[Brazil]]",
aliases = {"Fluminense", "Carioca"},
Wikipedia = {"Carioca#Sociolect", true},
regional_categories = "Carioca",
parent = "Southeast Brazil",
}
labels["Santa Catarina"] = {
region = "[[Santa Catarina]], a state of [[Brazil]]",
aliases = {"Catarinense"},
Wikipedia = "Santa Catarina (state)",
regional_categories = "Catarinense",
parent = "South Brazil",
}
labels["São Paulo"] = {
region = "[[São Paulo]], a city and surrounding state of [[Brazil]]",
aliases = {"Sao Paulo", "Paulista"},
Wikipedia = {"Paulistano dialect", "São Paulo (state)"},
regional_categories = "Paulista",
parent = "Southeast Brazil",
}
labels["São Tomé and Príncipe"] = {
aliases = {"Santomean", "São Tomé", "São Toméan", "Sao Tomean", "st", "Sao Tome and Principe", "Sao Tome"},
Wikipedia = "São Tomé and Príncipe Portuguese",
regional_categories = "Santomean",
parent = "Africa",
}
labels["Sergipe"] = {
region = "[[Sergipe]], a state of [[Brazil]]",
aliases = {"Sergipano", "Sergipan"},
Wikipedia = true,
regional_categories = "Sergipan",
parent = "Northeast Brazil",
}
labels["South Africa"] = {
aliases = {"South African"},
Wikipedia = true,
regional_categories = "South African",
parent = "Africa",
}
labels["South Brazil"] = {
aliases = {"Southern Brazilian", "South Brazilian"},
Wikipedia = "South Region, Brazil",
regional_categories = "Southern Brazilian",
parent = "Brazil",
}
labels["Southeast Brazil"] = {
aliases = {"Sudestino", "Sudeste", "Southeastern Brazilian", "Southeast Brazilian"},
Wikipedia = "Southeast Region, Brazil",
regional_categories = "Southeastern Brazilian",
parent = "Brazil",
}
labels["Timorese"] = {
aliases = {"tl", "Timor Leste", "Timor-Leste", "Timor", "East Timor"},
Wikipedia = "Timorese Portuguese",
regional_categories = true,
parent = "Asia",
}
labels["Trás-os-Montes"] = {
Wikipedia = true,
regional_categories = "Transmontane",
parent = "Portugal",
}
labels["Uruguay"] = {
Wikipedia = "Uruguayan Portuguese",
regional_categories = "Uruguayan",
parent = true,
}
labels["US"] = {
region = "the [[United States]]",
aliases = {"U.S.", "United States", "United States of America", "USA", "America", "American"}, -- America/American: should these be aliases of 'North America'?
Wikipedia = {"Portuguese Americans", "Brazilian Americans"},
regional_categories = "American",
parent = true,
}
labels["pajubá"] = {
fulldef = "Brazilian cryptolect spoken by practitioners of Afro-Brazilian religions and the LGBT community.",
noreg = true,
aliases = {"Pajubá", "bajubá", "Bajubá"},
Wikipedia = true,
plain_categories = "Pajubá",
parent = true,
othercat = "Portuguese cant",
}
labels["pre-1990"] = {
aliases = {"pre-1990 spelling", "pre-AO90", "AO45"},
display = "pre-1990 spelling",
Wikipedia = "Portuguese Language Orthographic Agreement of 1990",
}
-- also used for [[Template:standard spelling of]] et al
labels["Brazilian Portuguese spelling"] = {
aliases = {"Brazilian orthography", "Brazilian Portuguese form", "pt-br form"},
display = "Brazilian Portuguese spelling",
Wikipedia = "Portuguese orthography#Brazilian vs. European spelling",
plain_categories = "Brazilian Portuguese forms",
}
-- kludge, needed here for [[Template:pt-verb form of]] et al
labels["Brazilian Portuguese verb form"] = {
display = "Brazilian Portuguese spelling",
Wikipedia = "Portuguese orthography#Brazilian vs. European spelling",
plain_categories = "Brazilian Portuguese verb forms",
}
-- also used for [[Template:standard spelling of]] et al
labels["European Portuguese spelling"] = {
aliases = {"European Portuguese orthography","European Portuguese form"},
display = "European Portuguese spelling",
Wikipedia = "Portuguese orthography#Brazilian vs. European spelling",
plain_categories = "European Portuguese forms",
}
-- kludge, needed here for [[Template:pt-verb form of]] et al
labels["European Portuguese verb form"] = {
display = "European Portuguese spelling",
Wikipedia = "Portuguese orthography#Brazilian vs. European spelling",
plain_categories = "European Portuguese verb forms",
}
return require("Module:labels").finalize_data(labels)
pide3p0xdpk6e2au7118fqhypyhf7bm
193328
193327
2024-11-20T12:01:34Z
Lee
19
[[:en:Module:labels/data/lang/pt]] වෙතින් එක් සංශෝධනයක්
193327
Scribunto
text/plain
local labels = {}
labels["Africa"] = {
aliases = {"African"},
Wikipedia = true,
regional_categories = "African",
parent = true,
}
labels["Alagoas"] = {
region = "[[Alagoas]], a state of [[Brazil]]",
aliases = {"Alagoano", "Alagoan"},
Wikipedia = true,
regional_categories = "Alagoan",
parent = "Northeast Brazil",
}
labels["Alentejo"] = {
aliases = {"Alentejan", "alent"},
region = "[[Alentejo]], a {{w|NUTS statistical regions of Portugal|region}} of [[Portugal]]",
Wikipedia = "Alentejan Portuguese",
regional_categories = "Alentejano",
parent = "Portugal",
}
labels["Algarve"] = {
aliases = {"Algarvio", "alg"},
region = "[[Algarve]], a {{w|NUTS statistical regions of Portugal|region}} of [[Portugal]]",
Wikipedia = "Algarvian Portuguese",
regional_categories = "Algarvian",
parent = "Portugal",
}
labels["Amazonas"] = {
region = "[[Amazonas]], a state of [[Brazil]]",
aliases = {"Amazonense"},
Wikipedia = "Amazonas (Brazilian state)",
regional_categories = "Amazonense",
parent = "North Brazil",
}
labels["Angola"] = {
aliases = {"Angolan", "ao"},
Wikipedia = "Angolan Portuguese",
regional_categories = "Angolan",
parent = "Africa",
}
labels["Asia"] = {
aliases = {"Asian"},
Wikipedia = true,
regional_categories = "Asian",
parent = true,
}
labels["Azores"] = {
the = true,
aliases = {"Azorean", "Azorian", "azo"},
Wikipedia = true,
regional_categories = "Azorean",
parent = "Portugal",
}
labels["Bahia"] = {
region = "[[Bahia]], a state of [[Brazil]]",
aliases = {"Baiano", "Bahian"},
Wikipedia = true,
regional_categories = "Bahian",
parent = "Northeast Brazil",
}
labels["Beira"] = {
region = "[[Beira]], a {{w|Provinces of Portugal|traditional province}} of [[Portugal]]",
Wikipedia = "Beira (Portugal)",
regional_categories = "Beirão",
parent = "Portugal",
}
labels["Brazil"] = {
aliases = {"BR", "br", "Brazilian"},
Wikipedia = "Brazilian Portuguese",
regional_categories = "Brazilian",
parent = true,
}
labels["Cape Verde"] = {
aliases = {"Cape Verdean", "Verdean", "Cabo Verde", "Cabo Verdean", "cv"},
aliases = {"Cape Verdean"},
Wikipedia = "Cape Verdean Portuguese",
regional_categories = "Cape Verdean",
parent = "Africa",
}
labels["Ceará"] = {
region = "[[Ceará]], a state of [[Brazil]]",
aliases = {"Cearense"},
Wikipedia = true,
regional_categories = "Cearense",
parent = "Northeast Brazil",
}
labels["Central-West Brazil"] = {
region = "the {{w|Central-West Region, Brazil}}",
aliases = {"Centro-Oestino", "Centro-Oeste", "Central-Western Brazilian", "Central-West Brazilian"},
Wikipedia = "Central-West Region, Brazil",
regional_categories = "Central-Western Brazilian",
parent = "Brazil",
}
labels["Goa"] = {
aliases = {"Goan"},
Wikipedia = "Goan Portuguese",
regional_categories = "Goan",
}
labels["Goiás"] = {
region = "[[Goiás]], a state of [[Brazil]]",
aliases = {"Goiano"},
Wikipedia = true,
regional_categories = "Goiano",
parent = "Central-West Brazil",
}
labels["Guinea-Bissau"] = {
aliases = {"gw", "Guinean", "Guinea-Bissauan"},
Wikipedia = "Guinea-Bissau Portuguese",
regional_categories = true,
parent = "Africa",
}
labels["India"] = {
aliases = {"Indian"},
Wikipedia = {"Goan Portuguese", "Portuguese India"},
regional_categories = "Indian",
parent = "Asia",
}
labels["Macau"] = {
aliases = {"Macao", "Macanese", "mo"},
Wikipedia = "Macau Portuguese",
regional_categories = "Macanese",
parent = "Asia",
}
labels["Madeira"] = {
aliases = {"Madeiran", "mad"},
Wikipedia = true,
regional_categories = "Madeiran",
parent = "Portugal",
}
labels["Maranhão"] = {
region = "[[Maranhão]], a state of [[Brazil]]",
aliases = {"Maranhense"},
Wikipedia = true,
regional_categories = "Maranhense",
parent = "Northeast Brazil",
}
labels["Mato Grosso"] = {
region = "[[Mato Grosso]] and [[Mato Grosso do Sul]], two adjacent states of [[Brazil]]",
aliases = {"Mato-Grossense"},
Wikipedia = true,
regional_categories = "Mato-Grossense",
parent = "Central-West Brazil",
}
labels["Minas Gerais"] = {
region = "[[Minas Gerais]], a state of [[Brazil]]",
aliases = {"Mineiro"},
Wikipedia = true,
regional_categories = "Mineiro",
parent = "Southeast Brazil",
}
labels["Mozambique"] = {
aliases = {"Mozambican", "mz", "moz"},
Wikipedia = "Mozambican Portuguese",
regional_categories = "Mozambican",
parent = "Africa",
}
labels["North Brazil"] = {
aliases = {"Nortista", "Norteiro", "Northern Brazilian", "North Brazilian", "Amazon"},
Wikipedia = "North Region, Brazil",
regional_categories = "Northern Brazilian",
parent = "Brazil",
}
labels["Northeast Brazil"] = {
aliases = {"Nordestino", "Northeastern Brazilian", "Northeast Brazilian"},
Wikipedia = "Northeast Region, Brazil",
regional_categories = "Northeastern Brazilian",
parent = "Brazil",
}
labels["Northern Portugal"] = {
aliases = {"Nortenho", "North Portugal"},
Wikipedia = "Northern Portuguese",
regional_categories = true,
parent = "Portugal",
}
labels["Paraná"] = {
region = "[[Paraná]], a state of [[Brazil]]",
aliases = {"Paranaense"},
Wikipedia = "Paraná (state)",
regional_categories = "Paranaense",
parent = "South Brazil",
}
labels["Pernambuco"] = {
region = "[[Pernambuco]], a state of [[Brazil]]",
aliases = {"Pernambucano", "Pernambucan"},
Wikipedia = true,
regional_categories = "Pernambucan",
parent = "Northeast Brazil",
}
labels["Piauí"] = {
region = "[[Piauí]], a state of [[Brazil]]",
aliases = {"Piauiense"},
Wikipedia = true,
regional_categories = "Piauiense",
parent = "Northeast Brazil",
}
labels["Portugal"] = {
aliases = {"Portuguese", "PT", "pt", "European", "Europe"},
Wikipedia = "European Portuguese",
regional_categories = "European",
parent = true,
}
labels["Rio Grande do Norte"] = {
region = "[[Rio Grande do Norte]], a state of [[Brazil]]",
aliases = {"Potiguar", "Norte-Rio-Grandense"},
Wikipedia = true,
regional_categories = "Potiguar",
parent = "Northeast Brazil",
}
labels["Rio Grande do Sul"] = {
region = "[[Rio Grande do Sul]], a state of [[Brazil]]",
aliases = {"Gaúcho", "Gaucho"},
Wikipedia = true,
regional_categories = "Gaúcho",
parent = "South Brazil",
}
labels["Rio de Janeiro"] = {
region = "[[Rio de Janeiro]], a city and surrounding state of [[Brazil]]",
aliases = {"Fluminense", "Carioca"},
Wikipedia = {"Carioca#Sociolect", true},
regional_categories = "Carioca",
parent = "Southeast Brazil",
}
labels["Santa Catarina"] = {
region = "[[Santa Catarina]], a state of [[Brazil]]",
aliases = {"Catarinense"},
Wikipedia = "Santa Catarina (state)",
regional_categories = "Catarinense",
parent = "South Brazil",
}
labels["São Paulo"] = {
region = "[[São Paulo]], a city and surrounding state of [[Brazil]]",
aliases = {"Sao Paulo", "Paulista"},
Wikipedia = {"Paulistano dialect", "São Paulo (state)"},
regional_categories = "Paulista",
parent = "Southeast Brazil",
}
labels["São Tomé and Príncipe"] = {
aliases = {"Santomean", "São Tomé", "São Toméan", "Sao Tomean", "st", "Sao Tome and Principe", "Sao Tome"},
Wikipedia = "São Tomé and Príncipe Portuguese",
regional_categories = "Santomean",
parent = "Africa",
}
labels["Sergipe"] = {
region = "[[Sergipe]], a state of [[Brazil]]",
aliases = {"Sergipano", "Sergipan"},
Wikipedia = true,
regional_categories = "Sergipan",
parent = "Northeast Brazil",
}
labels["South Africa"] = {
aliases = {"South African"},
Wikipedia = true,
regional_categories = "South African",
parent = "Africa",
}
labels["South Brazil"] = {
aliases = {"Southern Brazilian", "South Brazilian"},
Wikipedia = "South Region, Brazil",
regional_categories = "Southern Brazilian",
parent = "Brazil",
}
labels["Southeast Brazil"] = {
aliases = {"Sudestino", "Sudeste", "Southeastern Brazilian", "Southeast Brazilian"},
Wikipedia = "Southeast Region, Brazil",
regional_categories = "Southeastern Brazilian",
parent = "Brazil",
}
labels["Timorese"] = {
aliases = {"tl", "Timor Leste", "Timor-Leste", "Timor", "East Timor"},
Wikipedia = "Timorese Portuguese",
regional_categories = true,
parent = "Asia",
}
labels["Trás-os-Montes"] = {
Wikipedia = true,
regional_categories = "Transmontane",
parent = "Portugal",
}
labels["Uruguay"] = {
Wikipedia = "Uruguayan Portuguese",
regional_categories = "Uruguayan",
parent = true,
}
labels["US"] = {
region = "the [[United States]]",
aliases = {"U.S.", "United States", "United States of America", "USA", "America", "American"}, -- America/American: should these be aliases of 'North America'?
Wikipedia = {"Portuguese Americans", "Brazilian Americans"},
regional_categories = "American",
parent = true,
}
labels["pajubá"] = {
fulldef = "Brazilian cryptolect spoken by practitioners of Afro-Brazilian religions and the LGBT community.",
noreg = true,
aliases = {"Pajubá", "bajubá", "Bajubá"},
Wikipedia = true,
plain_categories = "Pajubá",
parent = true,
othercat = "Portuguese cant",
}
labels["pre-1990"] = {
aliases = {"pre-1990 spelling", "pre-AO90", "AO45"},
display = "pre-1990 spelling",
Wikipedia = "Portuguese Language Orthographic Agreement of 1990",
}
-- also used for [[Template:standard spelling of]] et al
labels["Brazilian Portuguese spelling"] = {
aliases = {"Brazilian orthography", "Brazilian Portuguese form", "pt-br form"},
display = "Brazilian Portuguese spelling",
Wikipedia = "Portuguese orthography#Brazilian vs. European spelling",
plain_categories = "Brazilian Portuguese forms",
}
-- kludge, needed here for [[Template:pt-verb form of]] et al
labels["Brazilian Portuguese verb form"] = {
display = "Brazilian Portuguese spelling",
Wikipedia = "Portuguese orthography#Brazilian vs. European spelling",
plain_categories = "Brazilian Portuguese verb forms",
}
-- also used for [[Template:standard spelling of]] et al
labels["European Portuguese spelling"] = {
aliases = {"European Portuguese orthography","European Portuguese form"},
display = "European Portuguese spelling",
Wikipedia = "Portuguese orthography#Brazilian vs. European spelling",
plain_categories = "European Portuguese forms",
}
-- kludge, needed here for [[Template:pt-verb form of]] et al
labels["European Portuguese verb form"] = {
display = "European Portuguese spelling",
Wikipedia = "Portuguese orthography#Brazilian vs. European spelling",
plain_categories = "European Portuguese verb forms",
}
return require("Module:labels").finalize_data(labels)
pide3p0xdpk6e2au7118fqhypyhf7bm
සැකිල්ල:R:yue:Multi-function Chinese Character Database
10
21171
193414
54508
2024-11-21T09:31:49Z
Lee
19
193414
wikitext
text/x-wiki
{{cite-web
|entryurl=https://humanum.arts.cuhk.edu.hk/Lexis/lexi-mf/<!--
-->{{#ifeq:{{#invoke:string|len|s=<includeonly>{{{1|{{PAGENAME}}}}}</includeonly><noinclude>字</noinclude>}}<!--
-->|1<!--
-->|search.php?word=<includeonly>{{{1|{{PAGENAME}}}}}</includeonly><noinclude>字</noinclude><!--
-->|<!--
-->}}
|entry=<!--
-->{{#ifeq:{{#invoke:string|len|s=<includeonly>{{{1|{{PAGENAME}}}}}</includeonly><noinclude>字</noinclude>}}<!--
-->|1<!--
-->|{{lang|zh|<includeonly>{{{1|{{PAGENAME}}}}}</includeonly><noinclude>字</noinclude>}}<!--
-->|<!--
-->}}
|work={{lang|zh|sc=Hant|漢語多功能字庫}} (Multi-function චීන අනුලක්ෂණ දත්ත සමුදාය)
|url=https://humanum.arts.cuhk.edu.hk/Lexis/lexi-mf/
|publisher={{lw|zh|香港中文大學|tr=-}} (the {{w|Chinese University of Hong Kong}})
|year=2014–
}}<noinclude>[[Category:Cantonese reference templates|Multi-function Chinese Character Database]]</noinclude>
kfwjc0le7j8riwo5skc9g3z1mu5t1vp
193415
193414
2024-11-21T09:32:45Z
Lee
19
193415
wikitext
text/x-wiki
{{cite-web
|entryurl=https://humanum.arts.cuhk.edu.hk/Lexis/lexi-mf/<!--
-->{{#ifeq:{{#invoke:string|len|s=<includeonly>{{{1|{{PAGENAME}}}}}</includeonly><noinclude>字</noinclude>}}<!--
-->|1<!--
-->|search.php?word=<includeonly>{{{1|{{PAGENAME}}}}}</includeonly><noinclude>字</noinclude><!--
-->|<!--
-->}}
|entry=<!--
-->{{#ifeq:{{#invoke:string|len|s=<includeonly>{{{1|{{PAGENAME}}}}}</includeonly><noinclude>字</noinclude>}}<!--
-->|1<!--
-->|{{lang|zh|<includeonly>{{{1|{{PAGENAME}}}}}</includeonly><noinclude>字</noinclude>}}<!--
-->|<!--
-->}}
|work={{lang|zh|sc=Hant|漢語多功能字庫}} (Multi-function චීන අනුලක්ෂණ දත්ත සමුදාය)
|url=https://humanum.arts.cuhk.edu.hk/Lexis/lexi-mf/
|publisher={{lw|zh|香港中文大學|tr=-}} ({{w|Chinese University of Hong Kong|හොංකොං චීන විශ්වවිද්යාලය}})
|year=2014–
}}<noinclude>[[Category:Cantonese reference templates|Multi-function Chinese Character Database]]</noinclude>
rto9prtp0aiq7t4t2ts1rd6orl76iif
සැකිල්ල:R:JLect
10
21436
193418
55455
2024-11-21T09:42:05Z
Lee
19
193418
wikitext
text/x-wiki
{{R:Reference-meta
|entry = {{lang|mul|sc=Jpan|{{{entry|{{{1|{{PAGENAME}}}}}}}}}}
|url = https://www.jlect.com/entry/{{urlencode:{{{2|}}}}}/{{urlencode:{{{3|}}}}}/
|reference = JLect - Japonic භාෂා සහ Dialects දත්ත සමුදාය ශබ්දකෝෂය
|date = 2019
|accessdate = {{{accessdate|{{{4|}}}}}}
}}<noinclude>{{documentation}}{{reference template cat|ja|kzg|xug|mvi|ryn|okn|ryu|ams|tkn|rys|yoi|yox}}</noinclude>
o0jisyuoqmrwrn523c9yes6adazxv4t
Module:checkparams
828
78233
193477
183630
2024-11-19T15:41:11Z
en>Theknightwho
0
Code readability.
193477
Scribunto
text/plain
local export = {}
local debug_module = "Module:debug"
local maintenance_category_module = "Module:maintenance category"
local parameters_module = "Module:parameters"
local string_utilities_module = "Module:string utilities"
local template_parser_module = "Module:template parser"
local utilities_module = "Module:utilities"
local concat = table.concat
local get_current_title = mw.title.getCurrentTitle
local html_create = mw.html.create
local match = string.match
local new_title = mw.title.new
local next = next
local pairs = pairs
local require = require
local select = select
local sort = table.sort
local tostring = tostring
local type = type
--[==[
Loaders for functions in other modules, which overwrite themselves with the target function when called. This ensures modules are only loaded when needed, retains the speed/convenience of locally-declared pre-loaded functions, and has no overhead after the first call, since the target functions are called directly in any subsequent calls.]==]
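-- For example, the first call to find_parameters() below requires
-- [[Module:template parser]], rebinds the local name to the real function and
-- forwards the arguments; every later call then goes straight to the target.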
local function find_parameters(...)
find_parameters = require(template_parser_module).find_parameters
return find_parameters(...)
end
local function format_categories(...)
format_categories = require(utilities_module).format_categories
return format_categories(...)
end
local function formatted_error(...)
formatted_error = require(debug_module).formatted_error
return formatted_error(...)
end
local function gsplit(...)
gsplit = require(string_utilities_module).gsplit
return gsplit(...)
end
local function process_params(...)
process_params = require(parameters_module).process
return process_params(...)
end
local function scribunto_param_key(...)
scribunto_param_key = require(string_utilities_module).scribunto_param_key
return scribunto_param_key(...)
end
local function uses_hidden_category(...)
uses_hidden_category = require(maintenance_category_module).uses_hidden_category
return uses_hidden_category(...)
end
-- Returns a table of all arguments in `template_args` which are not supported
-- by `template_title` or listed in `additional`.
local function get_invalid_args(template_title, template_args, additional)
local content = template_title:getContent()
if not content then
-- This should only be possible if the input frame has been tampered with.
error("Could not retrieve the page content of \"" .. template_title.prefixedText .. "\".")
end
local allowed_params, seen = {}, {}
-- Detect all params used by the parent template. param:get_name() takes the
-- parent frame arg table as an argument so that preprocessing will take
-- them into account, since it will matter if the name contains another
-- parameter (e.g. the outer param in "{{{foo{{{bar}}}baz}}}" will change
-- depending on the value for bar=). `seen` memoizes results based on the
-- raw parameter text (which is stored as a string in the parameter object),
-- which avoids unnecessary param:get_name() calls, which are non-trivial.
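-- For instance, with the source fragment "{{{foo{{{bar}}}baz}}}" and a caller
-- that passes bar=1, the outer parameter's name resolves to "foo1baz"; a
-- different value for bar= would yield a different allowed_params key.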
for param in find_parameters(content) do
local raw = param.raw
if not seen[raw] then
allowed_params[param:get_name(template_args)] = true
seen[raw] = true
end
end
-- If frame.args[1] contains a comma separated list of param names, add
-- those as well.
if additional then
for param in gsplit(additional, ",", true) do
-- scribunto_param_key normalizes the param into the form returned
-- by param:get_name() (i.e. trimmed and converted to a number if
-- appropriate).
allowed_params[scribunto_param_key(param)] = true
end
end
local invalid_args = select(2, process_params(
template_args,
allowed_params,
"return unknown"
))
if not next(invalid_args) then
return invalid_args
end
-- Some templates use params 1 and 3 without using 2, which means that 2
-- will be in the list of invalid args when used as an empty placeholder
-- (e.g. {{foo|foo||bar}}). Detect and remove any empty positional
-- placeholder args.
local max_pos = 0
for param in pairs(allowed_params) do
if type(param) == "number" and param > max_pos then
max_pos = param
end
end
for param, arg in pairs(invalid_args) do
if (
type(param) == "number" and
param >= 1 and
param < max_pos and
-- Ignore if arg is empty, or only contains chars trimmed by
-- MediaWiki when handling named parameters.
not match(arg, "[^%z\t-\v\r ]")
) then
invalid_args[param] = nil
end
end
return invalid_args
end
local function compare_params(a, b)
a, b = a[1], b[1]
local type_a = type(a)
if type_a == type(b) then
return a < b
end
return type_a == "number"
end
-- Convert `args` into an array of sorted PARAM=ARG strings, using the parameter
-- name as the sortkey, with numbered params sorted before strings.
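-- For example, args_to_sorted_tuples({[2] = "b", foo = "x", [1] = "a"}) returns
-- {"1=a", "2=b", "foo=x"}: numbered parameters first in numeric order, then
-- named parameters in string order.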
local function args_to_sorted_tuples(args)
local msg, i = {}, 0
for k, v in pairs(args) do
i = i + 1
msg[i] = {k, v}
end
sort(msg, compare_params)
for j = 1, i do
msg[j] = concat(msg[j], "=")
end
return msg
end
local function apply_pre_tag(frame, invalid_args)
return frame:extensionTag("pre", concat(invalid_args, "\n"))
end
local function make_message(template_name, invalid_args, no_link)
local open, close
if no_link then
open, close = "", ""
else
open, close = "[[", "]]"
end
return "The template " .. open .. template_name .. close .. " does not use the parameter(s): " .. invalid_args .. " Please see " .. open .. "Module:checkparams" .. close .. " for help with this warning."
end
-- Called by non-Lua templates using "{{#invoke:checkparams|warn}}". `frame`
-- is checked for the following params:
-- `1=` (optional) a comma separated list of additional allowed parameters
-- `nowarn=` (optional) do not include preview warning in warning_text
-- `noattn=` (optional) do not include the attention-seeking span in warning_text
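-- A typical call from a template might look like (the parameter names here are
-- only illustrative):
--   {{#invoke:checkparams|warn|head,sort|nowarn=1}}
-- i.e. "head" and "sort" are allowed in addition to the parameters the template
-- itself uses, and the preview-only warning is suppressed.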
function export.warn(frame)
local parent, frame_args = frame:getParent(), frame.args
local template_name = parent:getTitle()
local template_title = new_title(template_name)
local invalid_args = get_invalid_args(template_title, parent.args, frame_args[1])
-- If there are no invalid template args, return.
if not next(invalid_args) then
return ""
end
-- Otherwise, generate "Invalid params" warning to be inserted onto the
-- wiki page.
local warn, attn, cat
invalid_args = args_to_sorted_tuples(invalid_args)
-- Show warning in previewer.
if not frame_args.nowarn then
warn = tostring(html_create("sup")
:addClass("error")
:addClass("previewonly")
:tag("small")
:wikitext(make_message(template_name, apply_pre_tag(frame, invalid_args)))
:allDone())
end
-- Add attentionseeking message. <pre> tags don't work in HTML attributes,
-- so use semicolons as delimiters.
if not frame_args.noattn then
attn = tostring(html_create("span")
:addClass("attentionseeking")
:attr("title", make_message(template_name, concat(invalid_args, "; ") .. ".", "no_link"))
:allDone())
end
-- Categorize if neither the current page nor the template would go in a hidden maintenance category.
if not (uses_hidden_category(get_current_title()) or uses_hidden_category(template_title)) then
cat = format_categories({"Pages using invalid parameters when calling " .. template_name}, nil, "-", nil, "force_output")
end
return (warn or "") .. (attn or "") .. (cat or "")
end
-- Called by non-Lua templates using "{{#invoke:checkparams|error}}". `frame`
-- is checked for the following params:
-- `1=` (optional) a comma separated list of additional allowed parameters
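-- For example, {{#invoke:checkparams|error|head,sort}} (parameter names
-- illustrative) accepts "head" and "sort" in addition to the template's own
-- parameters and produces an error message listing any others that were passed.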
function export.error(frame)
local parent = frame:getParent()
local template_name = parent:getTitle()
local invalid_args = get_invalid_args(new_title(template_name), parent.args, frame.args[1])
-- Use formatted_error, so that we can use <pre> tags in error messages:
-- any whitespace which isn't trimmed is treated as literal, so errors
-- caused by double-spaces or erroneous newlines in inputs need to be
-- displayed accurately.
if next(invalid_args) then
return formatted_error(make_message(
template_name,
apply_pre_tag(frame, args_to_sorted_tuples(invalid_args))
))
end
end
return export
docpcsyifqx2ojucdg6c3xbw6370gdr
193478
193477
2024-11-21T10:41:35Z
Lee
19
[[:en:Module:checkparams]] වෙතින් එක් සංශෝධනයක්
193477
Scribunto
text/plain
local export = {}
local debug_module = "Module:debug"
local maintenance_category_module = "Module:maintenance category"
local parameters_module = "Module:parameters"
local string_utilities_module = "Module:string utilities"
local template_parser_module = "Module:template parser"
local utilities_module = "Module:utilities"
local concat = table.concat
local get_current_title = mw.title.getCurrentTitle
local html_create = mw.html.create
local match = string.match
local new_title = mw.title.new
local next = next
local pairs = pairs
local require = require
local select = select
local sort = table.sort
local tostring = tostring
local type = type
--[==[
Loaders for functions in other modules, which overwrite themselves with the target function when called. This ensures modules are only loaded when needed, retains the speed/convenience of locally-declared pre-loaded functions, and has no overhead after the first call, since the target functions are called directly in any subsequent calls.]==]
local function find_parameters(...)
find_parameters = require(template_parser_module).find_parameters
return find_parameters(...)
end
local function format_categories(...)
format_categories = require(utilities_module).format_categories
return format_categories(...)
end
local function formatted_error(...)
formatted_error = require(debug_module).formatted_error
return formatted_error(...)
end
local function gsplit(...)
gsplit = require(string_utilities_module).gsplit
return gsplit(...)
end
local function process_params(...)
process_params = require(parameters_module).process
return process_params(...)
end
local function scribunto_param_key(...)
scribunto_param_key = require(string_utilities_module).scribunto_param_key
return scribunto_param_key(...)
end
local function uses_hidden_category(...)
uses_hidden_category = require(maintenance_category_module).uses_hidden_category
return uses_hidden_category(...)
end
-- Returns a table of all arguments in `template_args` which are not supported
-- by `template_title` or listed in `additional`.
local function get_invalid_args(template_title, template_args, additional)
local content = template_title:getContent()
if not content then
-- This should only be possible if the input frame has been tampered with.
error("Could not retrieve the page content of \"" .. template_title.prefixedText .. "\".")
end
local allowed_params, seen = {}, {}
-- Detect all params used by the parent template. param:get_name() takes the
-- parent frame arg table as an argument so that preprocessing will take
-- them into account, since it will matter if the name contains another
-- parameter (e.g. the outer param in "{{{foo{{{bar}}}baz}}}" will change
-- depending on the value for bar=). `seen` memoizes results based on the
-- raw parameter text (which is stored as a string in the parameter object),
-- which avoids unnecessary param:get_name() calls, which are non-trivial.
for param in find_parameters(content) do
local raw = param.raw
if not seen[raw] then
allowed_params[param:get_name(template_args)] = true
seen[raw] = true
end
end
-- If frame.args[1] contains a comma separated list of param names, add
-- those as well.
if additional then
for param in gsplit(additional, ",", true) do
-- scribunto_param_key normalizes the param into the form returned
-- by param:get_name() (i.e. trimmed and converted to a number if
-- appropriate).
allowed_params[scribunto_param_key(param)] = true
end
end
local invalid_args = select(2, process_params(
template_args,
allowed_params,
"return unknown"
))
if not next(invalid_args) then
return invalid_args
end
-- Some templates use params 1 and 3 without using 2, which means that 2
-- will be in the list of invalid args when used as an empty placeholder
-- (e.g. {{foo|foo||bar}}). Detect and remove any empty positional
-- placeholder args.
local max_pos = 0
for param in pairs(allowed_params) do
if type(param) == "number" and param > max_pos then
max_pos = param
end
end
for param, arg in pairs(invalid_args) do
if (
type(param) == "number" and
param >= 1 and
param < max_pos and
-- Ignore if arg is empty, or only contains chars trimmed by
-- MediaWiki when handling named parameters.
not match(arg, "[^%z\t-\v\r ]")
) then
invalid_args[param] = nil
end
end
return invalid_args
end
local function compare_params(a, b)
a, b = a[1], b[1]
local type_a = type(a)
if type_a == type(b) then
return a < b
end
return type_a == "number"
end
-- Convert `args` into an array of sorted PARAM=ARG strings, using the parameter
-- name as the sortkey, with numbered params sorted before strings.
local function args_to_sorted_tuples(args)
local msg, i = {}, 0
for k, v in pairs(args) do
i = i + 1
msg[i] = {k, v}
end
sort(msg, compare_params)
for j = 1, i do
msg[j] = concat(msg[j], "=")
end
return msg
end
local function apply_pre_tag(frame, invalid_args)
return frame:extensionTag("pre", concat(invalid_args, "\n"))
end
local function make_message(template_name, invalid_args, no_link)
local open, close
if no_link then
open, close = "", ""
else
open, close = "[[", "]]"
end
return "The template " .. open .. template_name .. close .. " does not use the parameter(s): " .. invalid_args .. " Please see " .. open .. "Module:checkparams" .. close .. " for help with this warning."
end
-- Called by non-Lua templates using "{{#invoke:checkparams|warn}}". `frame`
-- is checked for the following params:
-- `1=` (optional) a comma separated list of additional allowed parameters
-- `nowarn=` (optional) do not include preview warning in warning_text
-- `noattn=` (optional) do not include the attention-seeking span in warning_text
function export.warn(frame)
local parent, frame_args = frame:getParent(), frame.args
local template_name = parent:getTitle()
local template_title = new_title(template_name)
local invalid_args = get_invalid_args(template_title, parent.args, frame_args[1])
-- If there are no invalid template args, return.
if not next(invalid_args) then
return ""
end
-- Otherwise, generate "Invalid params" warning to be inserted onto the
-- wiki page.
local warn, attn, cat
invalid_args = args_to_sorted_tuples(invalid_args)
-- Show warning in previewer.
if not frame_args.nowarn then
warn = tostring(html_create("sup")
:addClass("error")
:addClass("previewonly")
:tag("small")
:wikitext(make_message(template_name, apply_pre_tag(frame, invalid_args)))
:allDone())
end
-- Add attentionseeking message. <pre> tags don't work in HTML attributes,
-- so use semicolons as delimiters.
if not frame_args.noattn then
attn = tostring(html_create("span")
:addClass("attentionseeking")
:attr("title", make_message(template_name, concat(invalid_args, "; ") .. ".", "no_link"))
:allDone())
end
-- Categorize if neither the current page nor the template would go in a hidden maintenance category.
if not (uses_hidden_category(get_current_title()) or uses_hidden_category(template_title)) then
cat = format_categories({"Pages using invalid parameters when calling " .. template_name}, nil, "-", nil, "force_output")
end
return (warn or "") .. (attn or "") .. (cat or "")
end
-- Called by non-Lua templates using "{{#invoke:checkparams|error}}". `frame`
-- is checked for the following params:
-- `1=` (optional) a comma separated list of additional allowed parameters
function export.error(frame)
local parent = frame:getParent()
local template_name = parent:getTitle()
local invalid_args = get_invalid_args(new_title(template_name), parent.args, frame.args[1])
-- Use formatted_error, so that we can use <pre> tags in error messages:
-- any whitespace which isn't trimmed is treated as literal, so errors
-- caused by double-spaces or erroneous newlines in inputs need to be
-- displayed accurately.
if next(invalid_args) then
return formatted_error(make_message(
template_name,
apply_pre_tag(frame, args_to_sorted_tuples(invalid_args))
))
end
end
return export
docpcsyifqx2ojucdg6c3xbw6370gdr
Module:typing-aids/data
828
112610
193444
164028
2024-10-07T07:36:51Z
en>Svartava
0
193444
Scribunto
text/plain
local U = require("Module:string utilities").char
local stops = "PpBbTtDdKkGgQq"
local velars = "GgKk"
local diacritics = "_%^\'0"
local vowels = "AaEeIiOoUu"
local sonorants = "RrLlMmNn"
local not_laryngeal_numbers = "[^123₁₂₃]"
local ProtoGreekpalatalized = "TtDdLlNnRr"
local ProtoGreekaspirated = "PpTtKk"
local acute = U(0x0301)
local data = {}
data["all"] = {
["h1"] = "h₁",
["h2"] = "h₂",
["h3"] = "h₃",
["e1"] = "ə₁",
["e2"] = "ə₂",
["e3"] = "ə₃",
["e%-2"] = "ē₂",
["_w"] = "ʷ",
["%^w"] = "ʷ",
["_h"] = "ʰ",
["%^h"] = "ʰ",
["wh"] = { "ʷʰ", before = "["..velars.."]", after = not_laryngeal_numbers, },
["h"] = { "ʰ", before = "["..stops.."]", after = not_laryngeal_numbers, },
["w"] = { "ʷ", before = "["..velars.."]", },
["_e"] = "ₔ", -- sometimes used for the schwa secundum
["_"] = U(0x304), -- macron
["^"] = { U(0x302), before = "["..vowels.."]["..diacritics.."]?", }, -- circumflex
["\'"] = { U(0x301), before = "["..velars..vowels..sonorants.."]["..diacritics.."]?", }, -- acute
["0"] = { U(0x325), before = "["..sonorants.."]["..diacritics.."]?", }, -- ring below
["`"] = { U(0x328), before = "["..vowels.."]["..diacritics.."]?", }, -- ogonek
["t\'"] = "þ",
["T\'"] = "Þ",
["@"] = "ə",
["%^"] = { U(0x30C), before = "["..ProtoGreekpalatalized.."]", }, -- caron
["~"] = "⁓", -- swung dash
}
data["ine-pro"] = {
[1] = {
["h1"] = "h₁",
["h2"] = "h₂",
["h3"] = "h₃",
["e1"] = "ə₁",
["e2"] = "ə₂",
["e3"] = "ə₃",
["_w"] = "ʷ",
["%^w"] = "ʷ",
["_h"] = "ʰ",
["%^h"] = "ʰ",
["wh"] = { "ʷʰ", before = "["..velars.."]", after = not_laryngeal_numbers, },
["h"] = { "ʰ", after = not_laryngeal_numbers, },
["w"] = { "ʷ", before = "["..velars.."]", },
["_e"] = "ₔ", -- sometimes used for the schwa secundum
["'"] = { U(0x301), before = "["..velars..vowels..sonorants.."]["..diacritics.."]?", }, -- acute
["_"] = { U(0x304), before = "["..vowels.."]["..diacritics.."]?", }, -- macron
["0"] = { U(0x325), before = "["..sonorants.."]["..diacritics.."]?", }, -- ring below
["~"] = "⁓", -- swung dash
["%^"] = { U(0x311), before = "["..velars.."]", }, -- inverted breve above
},
[2] = {
["%^"] = { U(0x32F), before = "[iu]", }, -- inverted breve above
},
}
data["PIE"] = data["ine-pro"]
data["gem-pro"] = {
["e_2"] = "ē₂",
["`"] = { U(0x328), before = "["..vowels.."]["..diacritics.."]?", }, -- ogonek
["t\'"] = "þ",
["T\'"] = "Þ",
["_"] = { U(0x304), before = "["..vowels.."]["..diacritics.."]?", }, -- macron
["%^"] = { U(0x302), before = "["..vowels.."]["..diacritics.."]?", }, -- circumflex
}
data["PG"] = data["gem-pro"]
data["grk-pro"] = {
[1] = {
["_\'"] = { U(0x304) .. U(0x301), before = "["..vowels.."]", }, -- macron and acute
["\'_"] = { U(0x304) .. U(0x301), before = "["..vowels.."]", }, -- macron and acute
["hw"] = { "ʷʰ", before = "["..velars.."]", },
["wh"] = { "ʷʰ", before = "["..velars.."]", },
["\'"] = { U(0x30C), before = "["..ProtoGreekpalatalized.."]", }, -- caron
},
[2] = {
["%^"] = U(0x30C), -- caron
["@"] = "ə",
["_"] = { U(0x304), before = "["..vowels.."]["..diacritics.."]?", }, -- macron
["\'"] = { U(0x301), before = "["..velars..vowels..sonorants.."]["..diacritics.."]*", }, -- acute
["h"] = { "ʰ", before = "["..ProtoGreekaspirated.."]", },
["w"] = { "ʷ", before = "["..velars.."]", },
}
}
data["PGr"] = data ["grk-pro"]
data["ru"] = {
[1] = {
["Jo"] = "Ё",
["jo"] = "ё",
["Ju"] = "Ю",
["ju"] = "ю",
["Ja"] = "Я",
["ja"] = "я",
["C'"] = "Ч",
["c'"] = "ч",
["S'"] = "Ш",
["s'"] = "ш",
["j'"] = "й",
},
[2] = {
["A"] = "А",
["a"] = "а",
["B"] = "Б",
["b"] = "б",
["V"] = "В",
["v"] = "в",
["G"] = "Г",
["g"] = "г",
["D"] = "Д",
["d"] = "д",
["E"] = "Е",
["e"] = "е",
["Z'"] = "Ж",
["z'"] = "ж",
["Z"] = "З",
["z"] = "з",
["I"] = "И",
["i"] = "и",
["J"] = "Й",
["j"] = "й",
["K"] = "К",
["k"] = "к",
["L"] = "Л",
["l"] = "л",
["M"] = "М",
["m"] = "м",
["N"] = "Н",
["n"] = "н",
["O"] = "О",
["o"] = "о",
["P"] = "П",
["p"] = "п",
["R"] = "Р",
["r"] = "р",
["S"] = "С",
["s"] = "с",
["T"] = "Т",
["t"] = "т",
["U"] = "У",
["u"] = "у",
["F"] = "Ф",
["f"] = "ф",
["H"] = "Х",
["h"] = "х",
["C"] = "Ц",
["c"] = "ц",
["X"] = "Щ",
["x"] = "щ",
["``"] = "Ъ",
["`"] = "ъ",
["Y"] = "Ы",
["y"] = "ы",
["''"] = "Ь",
["'''"] = "ь",
["`E"] = "Э",
["`e"] = "э",
["/"] = U(0x301), -- acute
},
}
--[[
The shortcut (or regex search pattern) is enclosed in [""],
and the replacement is enclosed in quotes after the equals sign:
["shortcut"] = "replacement",
If the shortcut includes parentheses "()",
the replacement will contain a capture string "%1" or "%2",
which matches the contents of the first or second set of parentheses.
]]
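-- A hypothetical capture-based entry (none appears in this particular data)
-- would look like:
--   ["([AaEeIiOoUu])%-"] = "%1" .. U(0x304),
-- which would turn e.g. "a-" into "a" followed by a combining macron, with %1
-- standing for whichever vowel the parenthesised class matched.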
data.acute_decomposer = {
["á"] = "a" .. acute,
["é"] = "e" .. acute,
["í"] = "i" .. acute,
["ó"] = "o" .. acute,
["ú"] = "u" .. acute,
["ý"] = "y" .. acute,
["ḗ"] = "ē" .. acute,
["ṓ"] = "ō" .. acute,
["Á"] = "A" .. acute,
["É"] = "E" .. acute,
["Í"] = "I" .. acute,
["Ó"] = "O" .. acute,
["Ú"] = "U" .. acute,
["Ý"] = "Y" .. acute,
["Ḗ"] = "Ē" .. acute,
["Ṓ"] = "Ō" .. acute,
}
--[=[
If the table is an array, the first string is the subpage of
[[Module:typing-aids/data]] that contains the language's replacements; the
second is the index of the field in the exported table of that module that
contains the language's replacements.
Otherwise, the table contains fields for particular scripts, specifying the
module used when the |sc= parameter is set to that script code, as well as a
"default" field for cases where no script has been specified.
]=]
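-- For instance, ["got"] = { "got", "got" } below means "use the 'got' field of
-- [[Module:typing-aids/data/got]]", while a script-keyed entry such as ["sog"]
-- picks the module from the |sc= code and falls back to its "default" field
-- when no script is given.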
data.modules = {
["ae"] = { "ae", "ae", },
["ae-old"] = { "ae", "ae", },
["ae-yng"] = { "ae", "ae", },
["ae-tr"] = { "ae", "ae-tr", },
["akk"] = { "akk", "akk-tr" },
["ar"] = { "ar" },
["arc"] = { default = "Armi", Palm = "Palm" },
["arc-imp"] = { default = "Armi", Palm = "Palm" },
["arc-pal"] = { "Palm", "Palm"},
["awa"] = { "bho", "bho" },
["awa-tr"] = { "bho", "bho-tr" },
["bho"] = { "bho", "bho" },
["bho-tr"] = { "bho", "bho-tr" },
["cu"] = { "Cyrs" },
["fa"] = { "fa" },
["fa-cls"] = { "fa" },
["fa-ira"] = { "fa" },
["gmy"] = { "gmy" },
-- ["gmy-tr"] = { "gmy", "gmy-tr" },
["got"] = { "got", "got" },
["got-tr"] = { "got", "got-tr" },
["grc"] = { "grc" },
["hit"] = { "hit", "hit" },
["hit-tr"] = { "hit", "hit-tr" },
["hy"] = { "hy", "hy", },
["hy-tr"] = { "hy", "hy-tr", },
["ja"] = { "ja", "ja" },
["kn"] = { "kn", "kn" },
["kn-tr"] = { "kn", "kn-tr" },
["Mani-tr"] = { "Mani", "Mani-tr" },
["Narb"] = { "Narb", "Narb"},
["Narb-tr"] = { "Narb", "Narb-tr"},
["pal"] = { default = "Phlv", Phli = "Phli", Mani = "Mani" },
["phn"] = { "Phnx" },
["orv"] = { "Cyrs" },
["os"] = { "os" },
["os-dig"] = { "os" },
["os-iro"] = { "os" },
["otk"] = { "Orkh" },
["oty"] = { "oty" },
["peo"] = { "peo" },
["Phli-tr"] = { "Phli", "Phli-tr" },
["Prti-tr"] = { "Prti", "Prti-tr" },
["mai"] = { "mai", "mai" },
["mai-tr"] = { "mai", "mai-tr" },
["mwr"] = { "mwr", "mwr" },
["mwr-tr"] = { "mwr", "mwr-tr" },
["omr"] = { "omr", "omr" },
["omr-tr"] = { "omr", "omr-tr" },
["inc-ash"] = { "pra", "pra" },
["inc-ash-tr"] = { "pra", "pra-tr" },
["inc-kam"] = { "sa-Sidd", "sa-Sidd" },
["inc-kam-tr"] = { "sa-Sidd", "sa-Sidd-tr" },
["inc-oaw"] = { "bho", "bho" },
["inc-oaw-tr"] = { "bho", "bho-tr" },
["pra"] = { "pra", "pra" },
["pra-tr"] = { "pra", "pra-tr" },
["pra-Deva"] = { "pra-Deva", "pra-Deva" },
["pra-Deva-tr"] = { "pra-Deva", "pra-Deva-tr" },
["pra-Knda"] = { "pra-Knda", "pra-Knda" },
["pra-Knda-tr"] = { "pra-Knda", "pra-Knda-tr" },
["doi"] = { "doi", "doi" },
["doi-tr"] = { "doi", "doi-tr" },
["sa-Gujr"] = { "sa-Gujr", "sa-Gujr" },
["sa-Gujr-tr"] = { "sa-Gujr", "sa-Gujr-tr" },
["sa-Kthi"] = { "bho", "bho" },
["sa-Kthi-tr"] = { "bho", "bho-tr" },
["sa-Modi"] = { "sa-Modi", "sa-Modi" },
["sa-Modi-tr"] = { "sa-Modi", "sa-Modi-tr" },
["sa-Shrd"] = { "sa-Shrd", "sa-Shrd" },
["sa-Shrd-tr"] = { "sa-Shrd", "sa-Shrd-tr" },
["sa-Sidd"] = { "sa-Sidd", "sa-Sidd" },
["sa-Sidd-tr"] = { "sa-Sidd", "sa-Sidd-tr" },
["omr-Deva"] = { "omr-Deva", "omr-Deva" },
["omr-Deva-tr"] = { "omr-Deva", "omr-Deva-tr" },
["kho"] = { "psu", "psu" },
["sa"] = { "sa", "sa" },
["sa-tr"] = { "sa", "sa-tr" },
["Sarb"] = { "Sarb", "Sarb"},
["Sarb-tr"] = { "Sarb", "Sarb-tr"},
["saz"] = { "saz", "saz" },
["saz-tr"] = { "saz", "saz-tr" },
["sd"] = { "sd", "sd" },
["sd-tr"] = { "sd", "sd-tr" },
["sem-tha"] = { "Narb", "Narb" },
["sgh"] = { "sgh-Cyrl"},
["skr"] = { "skr", "skr" },
["skr-tr"] = { "skr", "skr-tr" },
["sog"] = { default = "Sogd", Mani = "Mani", Sogo = "Sogo" },
["Sogd-tr"] = { "Sogd", "Sogd-tr" },
["Sogo-tr"] = { "Sogo", "Sogo-tr" },
["sux"] = { "sux" },
["uga"] = { "Ugar" },
["xbc"] = { default = "el", Mani = "Mani" },
["xpr"] = { default = "Mani" },
["xco"] = { default = "Chrs" },
["xsa"] = { "Sarb", "Sarb" },
["yah"] = { "yah-Cyrl"},
-- [""] = { "" },
}
return data
74xulyyabocyqfg3d4frqzu9295ajj4
193445
193444
2024-11-21T10:31:26Z
Lee
19
[[:en:Module:typing-aids/data]] වෙතින් එක් සංශෝධනයක්
193444
Scribunto
text/plain
local U = require("Module:string utilities").char
local stops = "PpBbTtDdKkGgQq"
local velars = "GgKk"
local diacritics = "_%^\'0"
local vowels = "AaEeIiOoUu"
local sonorants = "RrLlMmNn"
local not_laryngeal_numbers = "[^123₁₂₃]"
local ProtoGreekpalatalized = "TtDdLlNnRr"
local ProtoGreekaspirated = "PpTtKk"
local acute = U(0x0301)
local data = {}
data["all"] = {
["h1"] = "h₁",
["h2"] = "h₂",
["h3"] = "h₃",
["e1"] = "ə₁",
["e2"] = "ə₂",
["e3"] = "ə₃",
["e%-2"] = "ē₂",
["_w"] = "ʷ",
["%^w"] = "ʷ",
["_h"] = "ʰ",
["%^h"] = "ʰ",
["wh"] = { "ʷʰ", before = "["..velars.."]", after = not_laryngeal_numbers, },
["h"] = { "ʰ", before = "["..stops.."]", after = not_laryngeal_numbers, },
["w"] = { "ʷ", before = "["..velars.."]", },
["_e"] = "ₔ", -- sometimes used for the schwa secundum
["_"] = U(0x304), -- macron
["^"] = { U(0x302), before = "["..vowels.."]["..diacritics.."]?", }, -- circumflex
["\'"] = { U(0x301), before = "["..velars..vowels..sonorants.."]["..diacritics.."]?", }, -- acute
["0"] = { U(0x325), before = "["..sonorants.."]["..diacritics.."]?", }, -- ring below
["`"] = { U(0x328), before = "["..vowels.."]["..diacritics.."]?", }, -- ogonek
["t\'"] = "þ",
["T\'"] = "Þ",
["@"] = "ə",
["%^"] = { U(0x30C), before = "["..ProtoGreekpalatalized.."]", }, -- caron
["~"] = "⁓", -- swung dash
}
data["ine-pro"] = {
[1] = {
["h1"] = "h₁",
["h2"] = "h₂",
["h3"] = "h₃",
["e1"] = "ə₁",
["e2"] = "ə₂",
["e3"] = "ə₃",
["_w"] = "ʷ",
["%^w"] = "ʷ",
["_h"] = "ʰ",
["%^h"] = "ʰ",
["wh"] = { "ʷʰ", before = "["..velars.."]", after = not_laryngeal_numbers, },
["h"] = { "ʰ", after = not_laryngeal_numbers, },
["w"] = { "ʷ", before = "["..velars.."]", },
["_e"] = "ₔ", -- sometimes used for the schwa secundum
["'"] = { U(0x301), before = "["..velars..vowels..sonorants.."]["..diacritics.."]?", }, -- acute
["_"] = { U(0x304), before = "["..vowels.."]["..diacritics.."]?", }, -- macron
["0"] = { U(0x325), before = "["..sonorants.."]["..diacritics.."]?", }, -- ring below
["~"] = "⁓", -- swung dash
["%^"] = { U(0x311), before = "["..velars.."]", }, -- inverted breve above
},
[2] = {
["%^"] = { U(0x32F), before = "[iu]", }, -- inverted breve above
},
}
data["PIE"] = data["ine-pro"]
data["gem-pro"] = {
["e_2"] = "ē₂",
["`"] = { U(0x328), before = "["..vowels.."]["..diacritics.."]?", }, -- ogonek
["t\'"] = "þ",
["T\'"] = "Þ",
["_"] = { U(0x304), before = "["..vowels.."]["..diacritics.."]?", }, -- macron
["%^"] = { U(0x302), before = "["..vowels.."]["..diacritics.."]?", }, -- circumflex
}
data["PG"] = data["gem-pro"]
data["grk-pro"] = {
[1] = {
["_\'"] = { U(0x304) .. U(0x301), before = "["..vowels.."]", }, -- macron and acute
["\'_"] = { U(0x304) .. U(0x301), before = "["..vowels.."]", }, -- macron and acute
["hw"] = { "ʷʰ", before = "["..velars.."]", },
["wh"] = { "ʷʰ", before = "["..velars.."]", },
["\'"] = { U(0x30C), before = "["..ProtoGreekpalatalized.."]", }, -- caron
},
[2] = {
["%^"] = U(0x30C), -- caron
["@"] = "ə",
["_"] = { U(0x304), before = "["..vowels.."]["..diacritics.."]?", }, -- macron
["\'"] = { U(0x301), before = "["..velars..vowels..sonorants.."]["..diacritics.."]*", }, -- acute
["h"] = { "ʰ", before = "["..ProtoGreekaspirated.."]", },
["w"] = { "ʷ", before = "["..velars.."]", },
}
}
data["PGr"] = data ["grk-pro"]
data["ru"] = {
[1] = {
["Jo"] = "Ё",
["jo"] = "ё",
["Ju"] = "Ю",
["ju"] = "ю",
["Ja"] = "Я",
["ja"] = "я",
["C'"] = "Ч",
["c'"] = "ч",
["S'"] = "Ш",
["s'"] = "ш",
["j'"] = "й",
},
[2] = {
["A"] = "А",
["a"] = "а",
["B"] = "Б",
["b"] = "б",
["V"] = "В",
["v"] = "в",
["G"] = "Г",
["g"] = "г",
["D"] = "Д",
["d"] = "д",
["E"] = "Е",
["e"] = "е",
["Z'"] = "Ж",
["z'"] = "ж",
["Z"] = "З",
["z"] = "з",
["I"] = "И",
["i"] = "и",
["J"] = "Й",
["j"] = "й",
["K"] = "К",
["k"] = "к",
["L"] = "Л",
["l"] = "л",
["M"] = "М",
["m"] = "м",
["N"] = "Н",
["n"] = "н",
["O"] = "О",
["o"] = "о",
["P"] = "П",
["p"] = "п",
["R"] = "Р",
["r"] = "р",
["S"] = "С",
["s"] = "с",
["T"] = "Т",
["t"] = "т",
["U"] = "У",
["u"] = "у",
["F"] = "Ф",
["f"] = "ф",
["H"] = "Х",
["h"] = "х",
["C"] = "Ц",
["c"] = "ц",
["X"] = "Щ",
["x"] = "щ",
["``"] = "Ъ",
["`"] = "ъ",
["Y"] = "Ы",
["y"] = "ы",
["''"] = "Ь",
["'''"] = "ь",
["`E"] = "Э",
["`e"] = "э",
["/"] = U(0x301), -- acute
},
}
--[[
The shortcut (or regex search pattern) is enclosed in [""],
and the replacement is enclosed in quotes after the equals sign:
["shortcut"] = "replacement",
If the shortcut includes parentheses "()",
the replacement will contain a capture string "%1" or "%2",
which matches the contents of the first or second set of parentheses.
]]
data.acute_decomposer = {
["á"] = "a" .. acute,
["é"] = "e" .. acute,
["í"] = "i" .. acute,
["ó"] = "o" .. acute,
["ú"] = "u" .. acute,
["ý"] = "y" .. acute,
["ḗ"] = "ē" .. acute,
["ṓ"] = "ō" .. acute,
["Á"] = "A" .. acute,
["É"] = "E" .. acute,
["Í"] = "I" .. acute,
["Ó"] = "O" .. acute,
["Ú"] = "U" .. acute,
["Ý"] = "Y" .. acute,
["Ḗ"] = "Ē" .. acute,
["Ṓ"] = "Ō" .. acute,
}
--[=[
If the table is an array, the first string is the subpage of
[[Module:typing-aids/data]] that contains the language's replacements; the
second is the index of the field in the exported table of that module that
contains the language's replacements.
Otherwise, the table contains fields for particular scripts, specifying the
module used when the |sc= parameter is set to that script code, as well as a
"default" field for cases where no script has been specified.
]=]
data.modules = {
["ae"] = { "ae", "ae", },
["ae-old"] = { "ae", "ae", },
["ae-yng"] = { "ae", "ae", },
["ae-tr"] = { "ae", "ae-tr", },
["akk"] = { "akk", "akk-tr" },
["ar"] = { "ar" },
["arc"] = { default = "Armi", Palm = "Palm" },
["arc-imp"] = { default = "Armi", Palm = "Palm" },
["arc-pal"] = { "Palm", "Palm"},
["awa"] = { "bho", "bho" },
["awa-tr"] = { "bho", "bho-tr" },
["bho"] = { "bho", "bho" },
["bho-tr"] = { "bho", "bho-tr" },
["cu"] = { "Cyrs" },
["fa"] = { "fa" },
["fa-cls"] = { "fa" },
["fa-ira"] = { "fa" },
["gmy"] = { "gmy" },
-- ["gmy-tr"] = { "gmy", "gmy-tr" },
["got"] = { "got", "got" },
["got-tr"] = { "got", "got-tr" },
["grc"] = { "grc" },
["hit"] = { "hit", "hit" },
["hit-tr"] = { "hit", "hit-tr" },
["hy"] = { "hy", "hy", },
["hy-tr"] = { "hy", "hy-tr", },
["ja"] = { "ja", "ja" },
["kn"] = { "kn", "kn" },
["kn-tr"] = { "kn", "kn-tr" },
["Mani-tr"] = { "Mani", "Mani-tr" },
["Narb"] = { "Narb", "Narb"},
["Narb-tr"] = { "Narb", "Narb-tr"},
["pal"] = { default = "Phlv", Phli = "Phli", Mani = "Mani" },
["phn"] = { "Phnx" },
["orv"] = { "Cyrs" },
["os"] = { "os" },
["os-dig"] = { "os" },
["os-iro"] = { "os" },
["otk"] = { "Orkh" },
["oty"] = { "oty" },
["peo"] = { "peo" },
["Phli-tr"] = { "Phli", "Phli-tr" },
["Prti-tr"] = { "Prti", "Prti-tr" },
["mai"] = { "mai", "mai" },
["mai-tr"] = { "mai", "mai-tr" },
["mwr"] = { "mwr", "mwr" },
["mwr-tr"] = { "mwr", "mwr-tr" },
["omr"] = { "omr", "omr" },
["omr-tr"] = { "omr", "omr-tr" },
["inc-ash"] = { "pra", "pra" },
["inc-ash-tr"] = { "pra", "pra-tr" },
["inc-kam"] = { "sa-Sidd", "sa-Sidd" },
["inc-kam-tr"] = { "sa-Sidd", "sa-Sidd-tr" },
["inc-oaw"] = { "bho", "bho" },
["inc-oaw-tr"] = { "bho", "bho-tr" },
["pra"] = { "pra", "pra" },
["pra-tr"] = { "pra", "pra-tr" },
["pra-Deva"] = { "pra-Deva", "pra-Deva" },
["pra-Deva-tr"] = { "pra-Deva", "pra-Deva-tr" },
["pra-Knda"] = { "pra-Knda", "pra-Knda" },
["pra-Knda-tr"] = { "pra-Knda", "pra-Knda-tr" },
["doi"] = { "doi", "doi" },
["doi-tr"] = { "doi", "doi-tr" },
["sa-Gujr"] = { "sa-Gujr", "sa-Gujr" },
["sa-Gujr-tr"] = { "sa-Gujr", "sa-Gujr-tr" },
["sa-Kthi"] = { "bho", "bho" },
["sa-Kthi-tr"] = { "bho", "bho-tr" },
["sa-Modi"] = { "sa-Modi", "sa-Modi" },
["sa-Modi-tr"] = { "sa-Modi", "sa-Modi-tr" },
["sa-Shrd"] = { "sa-Shrd", "sa-Shrd" },
["sa-Shrd-tr"] = { "sa-Shrd", "sa-Shrd-tr" },
["sa-Sidd"] = { "sa-Sidd", "sa-Sidd" },
["sa-Sidd-tr"] = { "sa-Sidd", "sa-Sidd-tr" },
["omr-Deva"] = { "omr-Deva", "omr-Deva" },
["omr-Deva-tr"] = { "omr-Deva", "omr-Deva-tr" },
["kho"] = { "psu", "psu" },
["sa"] = { "sa", "sa" },
["sa-tr"] = { "sa", "sa-tr" },
["Sarb"] = { "Sarb", "Sarb"},
["Sarb-tr"] = { "Sarb", "Sarb-tr"},
["saz"] = { "saz", "saz" },
["saz-tr"] = { "saz", "saz-tr" },
["sd"] = { "sd", "sd" },
["sd-tr"] = { "sd", "sd-tr" },
["sem-tha"] = { "Narb", "Narb" },
["sgh"] = { "sgh-Cyrl"},
["skr"] = { "skr", "skr" },
["skr-tr"] = { "skr", "skr-tr" },
["sog"] = { default = "Sogd", Mani = "Mani", Sogo = "Sogo" },
["Sogd-tr"] = { "Sogd", "Sogd-tr" },
["Sogo-tr"] = { "Sogo", "Sogo-tr" },
["sux"] = { "sux" },
["uga"] = { "Ugar" },
["xbc"] = { default = "el", Mani = "Mani" },
["xpr"] = { default = "Mani" },
["xco"] = { default = "Chrs" },
["xsa"] = { "Sarb", "Sarb" },
["yah"] = { "yah-Cyrl"},
-- [""] = { "" },
}
return data
74xulyyabocyqfg3d4frqzu9295ajj4
Module:typing-aids
828
112611
193442
164030
2024-10-07T07:38:02Z
en>Svartava
0
193442
Scribunto
text/plain
local export = {}
local m_data = mw.loadData("Module:typing-aids/data")
local m_string_utils = require("Module:string utilities")
local reorderDiacritics = require("Module:grc-utilities").reorderDiacritics
local template_link = require("Module:template parser").templateLink
local listToSet = require("Module:table").listToSet
--[=[
Other data modules:
-- [[Module:typing-aids/data/ar]]
-- [[Module:typing-aids/data/fa]]
-- [[Module:typing-aids/data/gmy]]
-- [[Module:typing-aids/data/grc]]
-- [[Module:typing-aids/data/hit]]
-- [[Module:typing-aids/data/hy]]
-- [[Module:typing-aids/data/sa]]
-- [[Module:typing-aids/data/sux]]
-- [[Module:typing-aids/data/got]]
-- [[Module:typing-aids/data/pra]]
--]=]
local U = m_string_utils.char
local gsub = m_string_utils.gsub
local find = m_string_utils.find
local toNFC = mw.ustring.toNFC
local toNFD = mw.ustring.toNFD
local acute = U(0x0301)
local macron = U(0x0304)
local function load_or_nil(module_name)
local success, module = pcall(mw.loadData, module_name)
if success then
return module
end
end
-- Try to load a list of modules. Return the first successfully loaded module
-- and its name.
local function get_module_and_title(...)
for i = 1, select("#", ...) do
local module_name = select(i, ...)
if module_name then
local module = load_or_nil(module_name)
if module then
return module, module_name
end
end
end
end
local function clone_args(frame)
local args = frame.getParent and frame:getParent().args or frame
local newargs = {}
for k, v in pairs(args) do
if v ~= "" then
newargs[k] = v
end
end
return newargs
end
local function tag(text, lang)
if lang and not find(lang, "%-tr$") then
return '<span lang="' .. lang .. '">' .. text .. '</span>'
else
return text
end
end
local acute_decomposer
-- compose Latin text, then decompose into sequences of letter and combining
-- accent, either partly or completely depending on the language.
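-- A minimal illustration: toNFD("á") returns "a" .. U(0x0301), i.e. full
-- decomposition, whereas for the languages listed below the text is first
-- composed with toNFC and only acute-bearing characters are split apart
-- again via the acute_decomposer table of [[Module:typing-aids/data]].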
local function compose_decompose(text, lang)
if lang == "sa" or lang == "hy" or lang == "xcl" or lang == "kn" or lang == "inc-ash" or lang == "inc-kam" or lang == "inc-oaw" or lang == "pra" or lang == "omr" or lang == "mai" or lang == "saz" or lang == "sd" or lang == "mwr" or lang == "skr" or lang == "pra-Knda" or lang == "pra-Deva" or lang == "doi" or lang == "sa-Gujr" or lang == "sa-Modi" or lang == "sa-Shrd" or lang == "sa-Sidd" or lang == "omr-Deva" or lang == "bho" then
acute_decomposer = acute_decomposer or m_data.acute_decomposer
text = toNFC(text)
text = gsub(text, ".", acute_decomposer)
else
text = toNFD(text)
end
return text
end
local function do_one_replacement(text, from, to, before, after)
-- FIXME! These won't work properly if there are any captures in FROM.
if before then
from = "(" .. before .. ")" .. from
to = "%1" .. to
end
if after then
from = from .. "(" .. after .. ")"
to = to .. (before and "%2" or "%1")
end
text = gsub(text, from, to) -- discard second retval
return text
end
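-- Illustrative only: do_one_replacement("kha", "a", "ā", "h") turns FROM into
-- the pattern "(h)a" and TO into "%1ā", so the result is "khā"; an "a" that is
-- not preceded by "h" would be left unchanged.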
local function do_key_value_replacement_table(text, tab)
for from, repl in pairs(tab) do
local to, before, after
if type(repl) == "string" then
to = repl
else
to = repl[1]
before = repl.before
after = repl.after
end
text = do_one_replacement(text, from, to, before, after)
end
-- FIXME, why is this being done here after each table?
text = mw.text.trim(text)
return text
end
local function do_replacements(text, repls)
if repls[1] and repls[1][1] then
-- new-style list
for _, from_to in ipairs(repls) do
text = do_one_replacement(text, from_to[1], from_to[2], from_to.before, from_to.after)
end
text = mw.text.trim(text)
elseif repls[1] then
for _, repl_table in ipairs(repls) do
text = do_key_value_replacement_table(text, repl_table)
end
else
text = do_key_value_replacement_table(text, repls)
end
return text
end
local function get_replacements(lang, script)
local module_data = m_data.modules[lang]
local replacements_module
if not module_data then
replacements_module = m_data
else
local success
local resolved_name = "Module:typing-aids/data/"
.. (module_data[1] or module_data[script] or module_data.default)
replacements_module = load_or_nil(resolved_name)
if not replacements_module then
error("Data module " .. resolved_name
.. " specified in 'modules' table of [[Module:typing-aids/data]] does not exist.")
end
end
local replacements
if not module_data then
if lang then
replacements = replacements_module[lang]
else
replacements = replacements_module.all
end
elseif module_data[2] then
replacements = replacements_module[module_data[2]]
else
replacements = replacements_module
end
return replacements
end
local function interpret_shortcuts(text, origlang, script, untouchedDiacritics, moduleName)
if not text or type(text) ~= "string" then
return nil
end
local lang = origlang
if lang == "xcl" then lang = "hy" end
local replacements = moduleName and load_or_nil("Module:typing-aids/data/" .. moduleName)
or get_replacements(lang, script)
or error("The language code \"" .. tostring(origlang) ..
"\" does not have a set of replacements in Module:typing-aids/data or its submodules.")
-- Hittite transliteration must operate on composed letters, because it adds
-- diacritics to Basic Latin letters: s -> š, for instance.
if lang ~= "hit-tr" then
text = compose_decompose(text, lang)
end
if lang == "ae" or lang == "bho" or lang == "sa" or lang == "got" or lang == "hy" or lang == "xcl" or lang == "kn" or lang == "inc-ash" or lang == "inc-kam" or lang == "pra" or lang == "pal" or lang == "sog" or lang == "xpr" or lang == "omr" or lang == "mai" or lang == "saz" or lang == "sd" or lang == "mwr" or lang == "skr" or lang == "pra-Knda" or lang == "pra-Deva" or lang == "doi" or lang == "sa-Gujr" or lang == "sa-Modi" or lang == "sa-Shrd" or lang == "sa-Sidd" or lang == "inc-oaw" or lang == "omr-Deva" then
local transliterationTable = get_replacements(lang .. "-tr")
or script and get_replacements(script .. "-tr")
if not transliterationTable then
error("No transliteration table for " .. lang .. "-tr" .. (script and (" or " .. script .. "-tr") or " and no script has been provided"))
end
text = do_replacements(text, transliterationTable)
text = compose_decompose(text, lang)
text = do_replacements(text, replacements)
else
text = do_replacements(text, replacements)
if lang == "grc" and not untouchedDiacritics then
text = reorderDiacritics(text)
end
end
return text
end
export.interpret_shortcuts = interpret_shortcuts
local function hyphen_separated_replacements(text, lang)
local module = mw.loadData("Module:typing-aids/data/" .. lang)
local replacements = module[lang] or module
if not replacements then
error("??")
end
text = text:gsub("<sup>(.-)</sup>%-?", "%1-")
if replacements.pre then
for k, v in pairs(replacements.pre) do
text = gsub(text, k, v)
end
end
local output = {}
-- Find groups of characters that aren't hyphens or whitespace.
for symbol in text:gmatch("([^%-%s]+)") do
table.insert(output, replacements[symbol] or symbol)
end
return table.concat(output)
end
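-- Hypothetical sketch (the actual sign values live in the per-language data
-- modules): if replacements["an"] were "𒀭", then "an-an" would come out as
-- "𒀭𒀭", with the hyphens dropped and any unrecognised symbol passed through
-- unchanged.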
local function add_parameter(list, args, key, content)
if not content then content = args[key] end
args[key] = nil
if not content then return false end
if find(content, "=") or type(key) == "string" then
table.insert(list, key .. "=" .. content)
else
while list.maxarg < key - 1 do
table.insert(list, "")
list.maxarg = list.maxarg + 1
end
table.insert(list, content)
list.maxarg = key
end
return true
end
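-- Worked example of the padding above: with list.maxarg == 1, calling
-- add_parameter(list, args, 4, "foo") appends two empty strings for the
-- missing positional slots 2 and 3, then "foo", and sets list.maxarg to 4.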
local function add_and_convert_parameter(list, args, key, altkey1, altkey2, trkey, lang, scriptKey)
if altkey1 and args[altkey1] then
add_and_convert_parameter(list, args, key, nil, nil, nil, lang, scriptKey)
key = altkey1
elseif altkey2 and args[altkey2] then
add_and_convert_parameter(list, args, key, nil, nil, nil, lang, scriptKey)
key = altkey2
end
local content = args[key]
if trkey and args[trkey] then
if not content then
content = args[trkey]
args[trkey] = nil
else
if args[trkey] ~= "-" then
error("Can't specify manual translit " .. trkey .. "=" ..
args[trkey] .. " along with parameter " .. key .. "=" .. content)
end
end
end
if not content then return false end
local trcontent = nil
-- If Sanskrit or Prakrit or Kannada and there's an acute accent specified somehow or other
-- in the source content, preserve the translit, which includes the
-- accent when the Devanagari doesn't.
if lang == "sa" or lang == "kn" or lang == "inc-ash" or lang == "inc-kam" or lang == "pra" or lang == "omr" or lang == "mai" or lang == "saz" or lang == "sd" or lang == "mwr" or lang == "skr" or lang == "pra-Knda" or lang == "pra-Deva" or lang == "doi" or lang == "sa-Gujr" or lang == "sa-Modi" or lang == "sa-Shrd" or lang == "sa-Sidd" or lang == "inc-oaw" or lang == "omr-Deva" or lang == "bho" then
local proposed_trcontent = interpret_shortcuts(content, lang .. "-tr")
if find(proposed_trcontent, acute) then
trcontent = proposed_trcontent
end
end
-- If Gothic and there's a macron specified somehow or other
-- in the source content that remains after canonicalization, preserve
-- the translit, which includes the accent when the Gothic doesn't.
if lang == "got" then
local proposed_trcontent = interpret_shortcuts(content, "got-tr")
if find(proposed_trcontent, macron) then
trcontent = proposed_trcontent
end
end
--[[
if lang == "gmy" then
local proposed_trcontent = interpret_shortcuts(content, "gmy-tr")
if find(proposed_trcontent, macron) then
trcontent = proposed_trcontent
end
end
--]]
local converted_content
if lang == "hit" or lang == "akk" then
trcontent = interpret_shortcuts(content, lang .. "-tr")
converted_content = hyphen_separated_replacements(content, lang)
elseif lang == "sux" or lang == "gmy" then
converted_content = hyphen_separated_replacements(content, lang)
elseif lang == "pal" or lang == "sog" or lang == "xpr" then
local script = args[scriptKey] or m_data.modules[lang].default
local script_object = require "Module:scripts".getByCode(script)
local proposed_trcontent = interpret_shortcuts(content, script .. "-tr")
local auto_tr = (require "Module:languages".getByCode(lang)
:transliterate(converted_content, script_object))
if proposed_trcontent ~= auto_tr then
trcontent = proposed_trcontent
end
converted_content = interpret_shortcuts(content, lang, script, nil, args.module)
else
converted_content = interpret_shortcuts(content, lang, args[scriptKey], nil, args.module)
end
add_parameter(list, args, key, converted_content)
if trcontent then
add_parameter(list, args, trkey, trcontent)
end
return true
end
local is_compound = listToSet{ "affix", "af", "compound", "com", "suffix", "suf", "prefix", "pre", "con", "confix", "surf" }
-- Technically lang, ux, and uxi aren't link templates, but they have many of the same parameters.
local is_link_template = listToSet{
"m", "m+", "langname-mention", "l", "ll",
"cog", "noncog", "cognate", "ncog", "nc", "noncognate", "cog+",
"m-self", "l-self",
"alter", "alt", "syn",
"alt sp", "alt form",
"alternative spelling of", "alternative form of",
"desc", "desctree", "lang", "usex", "ux", "uxi"
}
local is_two_lang_link_template = listToSet{ "der", "inh", "bor", "slbor", "lbor", "calque", "cal", "translit", "inh+", "bor+" }
local is_trans_template = listToSet{ "t", "t+", "t-check", "t+check" }
local function print_template(args)
local parameters = {}
for key, value in pairs(args) do
parameters[key] = value
end
local template = parameters[1]
local result = { }
local lang = nil
result.maxarg = 0
add_parameter(result, parameters, 1)
lang = parameters[2]
add_parameter(result, parameters, 2)
if is_link_template[template] then
add_and_convert_parameter(result, parameters, 3, "alt", 4, "tr", lang, "sc")
for _, param in ipairs({ 5, "gloss", "t" }) do
add_parameter(result, parameters, param)
end
elseif is_two_lang_link_template[template] then
lang = parameters[3]
add_parameter(result, parameters, 3)
add_and_convert_parameter(result, parameters, 4, "alt", 5, "tr", lang, "sc")
for _, param in ipairs({ 6, "gloss", "t" }) do
add_parameter(result, parameters, param)
end
elseif is_trans_template[template] then
add_and_convert_parameter(result, parameters, 3, "alt", nil, "tr", lang, "sc")
local i = 4
while true do
if not parameters[i] then
break
end
add_parameter(result, parameters, i)
i = i + 1
end
elseif is_compound[template] then
local i = 1
while true do
local sawparam = add_and_convert_parameter(result, parameters, i + 2, "alt" .. i, nil, "tr" .. i, lang, "sc")
if not sawparam then
break
end
for _, param in ipairs({ "id", "lang", "sc", "t", "pos", "lit" }) do
add_parameter(result, parameters, param .. i)
end
i = i + 1
end
else
error("Unrecognized template name '" .. template .. "'")
end
-- Copy any remaining parameters
for k in pairs(parameters) do
add_parameter(result, parameters, k)
end
return "{{" .. table.concat(result, "|") .. "}}"
end
function export.link(frame)
local args = frame.args or frame
return print_template(args)
end
function export.replace(frame)
local args = clone_args(frame)
local text, lang
if args[4] or args[3] or args.tr then
return print_template(args)
else
if args[2] then
lang, text = args[1], args[2]
else
lang, text = "all", args[1]
end
end
if lang == "akk" or lang == "gmy" or lang == "hit" or lang == "sux" then
return hyphen_separated_replacements(text, lang)
else
text = interpret_shortcuts(text, lang, args.sc, args.noreorder, args.module)
end
return text or ""
end
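-- Rough usage sketch (parameter order only; the wrapper template and the
-- shortcut strings are defined elsewhere): called with two positional
-- parameters, e.g. {{subst:chars|grc|<shortcuts>}}, the first is the language
-- code and the second the shortcut text; with a single parameter the "all"
-- table of [[Module:typing-aids/data]] is applied instead.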
function export.example(frame)
local args = clone_args(frame)
local text, lang
if args[2] then
lang, text = args[1], args[2]
else
lang, text = "all", args[1]
end
local textparam
if find(text, "=") then
textparam = "2="..text -- Currently, "=" is only used in the shortcuts for Greek, and Greek is always found in the second parameter, since the first parameter specify the language, "grc".
else
textparam = text
end
local template = {
lang ~= "all" and lang or textparam,
lang ~= "all" and textparam or nil,
}
local output = { template_link("subst:chars", template) }
table.insert(output, "\n| ")
table.insert(output, lang ~= "all" and "<span lang=\""..lang.."\">" or "")
table.insert(output, export.replace({lang, text}))
table.insert(output, lang ~= "all" and "</span>" or "")
return table.concat(output)
end
function export.examples(frame)
local args = frame.getParent and frame:getParent().args or frame.args[1] and frame.args or frame
local examples = args[1] and mw.text.split(args[1], ";%s+") or error('No content in the first parameter.')
local lang = args["lang"]
local output = {
[[
{| class="wikitable"
! shortcut !! result
]]
}
local row = [[
|-
| templateCode || result
]]
for _, example in pairs(examples) do
local textparam
if find(example, "=") then
textparam = "2=" .. example -- Currently, "=" is only used in the shortcuts for Greek, and Greek is always found in the second parameter, since the first parameter specify the language, "grc".
else
textparam = example
end
local template = {
lang or textparam,
lang and textparam,
}
local result = export.replace{lang, example}
local content = {
templateCode = template_link("subst:chars", template),
result = tag(result, lang),
}
local function addContent(item)
if content[item] then
return content[item]
else
return 'No content for "' .. item .. '".'
end
end
local row = gsub(row, "%a+", addContent)
table.insert(output, row)
end
return table.concat(output) .. "|}"
end
return export
rlerxmekniu3z87shkn5hkp0sqrihb8
193443
193442
2024-11-21T10:31:07Z
Lee
19
[[:en:Module:typing-aids]] වෙතින් එක් සංශෝධනයක්
193442
Scribunto
text/plain
local export = {}
local m_data = mw.loadData("Module:typing-aids/data")
local m_string_utils = require("Module:string utilities")
local reorderDiacritics = require("Module:grc-utilities").reorderDiacritics
local template_link = require("Module:template parser").templateLink
local listToSet = require("Module:table").listToSet
--[=[
Other data modules:
-- [[Module:typing-aids/data/ar]]
-- [[Module:typing-aids/data/fa]]
-- [[Module:typing-aids/data/gmy]]
-- [[Module:typing-aids/data/grc]]
-- [[Module:typing-aids/data/hit]]
-- [[Module:typing-aids/data/hy]]
-- [[Module:typing-aids/data/sa]]
-- [[Module:typing-aids/data/sux]]
-- [[Module:typing-aids/data/got]]
-- [[Module:typing-aids/data/pra]]
--]=]
local U = m_string_utils.char
local gsub = m_string_utils.gsub
local find = m_string_utils.find
local toNFC = mw.ustring.toNFC
local toNFD = mw.ustring.toNFD
local acute = U(0x0301)
local macron = U(0x0304)
local function load_or_nil(module_name)
local success, module = pcall(mw.loadData, module_name)
if success then
return module
end
end
-- Try to load a list of modules. Return the first successfully loaded module
-- and its name.
local function get_module_and_title(...)
for i = 1, select("#", ...) do
local module_name = select(i, ...)
if module_name then
local module = load_or_nil(module_name)
if module then
return module, module_name
end
end
end
end
local function clone_args(frame)
local args = frame.getParent and frame:getParent().args or frame
local newargs = {}
for k, v in pairs(args) do
if v ~= "" then
newargs[k] = v
end
end
return newargs
end
local function tag(text, lang)
if lang and not find(lang, "%-tr$") then
return '<span lang="' .. lang .. '">' .. text .. '</span>'
else
return text
end
end
local acute_decomposer
-- compose Latin text, then decompose into sequences of letter and combining
-- accent, either partly or completely depending on the language.
local function compose_decompose(text, lang)
if lang == "sa" or lang == "hy" or lang == "xcl" or lang == "kn" or lang == "inc-ash" or lang == "inc-kam" or lang == "inc-oaw" or lang == "pra" or lang == "omr" or lang == "mai" or lang == "saz" or lang == "sd" or lang == "mwr" or lang == "skr" or lang == "pra-Knda" or lang == "pra-Deva" or lang == "doi" or lang == "sa-Gujr" or lang == "sa-Modi" or lang == "sa-Shrd" or lang == "sa-Sidd" or lang == "omr-Deva" or lang == "bho" then
acute_decomposer = acute_decomposer or m_data.acute_decomposer
text = toNFC(text)
text = gsub(text, ".", acute_decomposer)
else
text = toNFD(text)
end
return text
end
local function do_one_replacement(text, from, to, before, after)
-- FIXME! These won't work properly if there are any captures in FROM.
if before then
from = "(" .. before .. ")" .. from
to = "%1" .. to
end
if after then
from = from .. "(" .. after .. ")"
to = to .. (before and "%2" or "%1")
end
text = gsub(text, from, to) -- discard second retval
return text
end
local function do_key_value_replacement_table(text, tab)
for from, repl in pairs(tab) do
local to, before, after
if type(repl) == "string" then
to = repl
else
to = repl[1]
before = repl.before
after = repl.after
end
text = do_one_replacement(text, from, to, before, after)
end
-- FIXME, why is this being done here after each table?
text = mw.text.trim(text)
return text
end
local function do_replacements(text, repls)
if repls[1] and repls[1][1] then
-- new-style list
for _, from_to in ipairs(repls) do
text = do_one_replacement(text, from_to[1], from_to[2], from_to.before, from_to.after)
end
text = mw.text.trim(text)
elseif repls[1] then
for _, repl_table in ipairs(repls) do
text = do_key_value_replacement_table(text, repl_table)
end
else
text = do_key_value_replacement_table(text, repls)
end
return text
end
local function get_replacements(lang, script)
local module_data = m_data.modules[lang]
local replacements_module
if not module_data then
replacements_module = m_data
else
local success
local resolved_name = "Module:typing-aids/data/"
.. (module_data[1] or module_data[script] or module_data.default)
replacements_module = load_or_nil(resolved_name)
if not replacements_module then
error("Data module " .. resolved_name
.. " specified in 'modules' table of [[Module:typing-aids/data]] does not exist.")
end
end
local replacements
if not module_data then
if lang then
replacements = replacements_module[lang]
else
replacements = replacements_module.all
end
elseif module_data[2] then
replacements = replacements_module[module_data[2]]
else
replacements = replacements_module
end
return replacements
end
local function interpret_shortcuts(text, origlang, script, untouchedDiacritics, moduleName)
if not text or type(text) ~= "string" then
return nil
end
local lang = origlang
if lang == "xcl" then lang = "hy" end
local replacements = moduleName and load_or_nil("Module:typing-aids/data/" .. moduleName)
or get_replacements(lang, script)
or error("The language code \"" .. tostring(origlang) ..
"\" does not have a set of replacements in Module:typing-aids/data or its submodules.")
-- Hittite transliteration must operate on composed letters, because it adds
-- diacritics to Basic Latin letters: s -> š, for instance.
if lang ~= "hit-tr" then
text = compose_decompose(text, lang)
end
if lang == "ae" or lang == "bho" or lang == "sa" or lang == "got" or lang == "hy" or lang == "xcl" or lang == "kn" or lang == "inc-ash" or lang == "inc-kam" or lang == "pra" or lang == "pal" or lang == "sog" or lang == "xpr" or lang == "omr" or lang == "mai" or lang == "saz" or lang == "sd" or lang == "mwr" or lang == "skr" or lang == "pra-Knda" or lang == "pra-Deva" or lang == "doi" or lang == "sa-Gujr" or lang == "sa-Modi" or lang == "sa-Shrd" or lang == "sa-Sidd" or lang == "inc-oaw" or lang == "omr-Deva" then
local transliterationTable = get_replacements(lang .. "-tr")
or script and get_replacements(script .. "-tr")
if not transliterationTable then
error("No transliteration table for " .. lang .. "-tr" .. (script and (" or " .. script .. "-tr") or " and no script has been provided"))
end
text = do_replacements(text, transliterationTable)
text = compose_decompose(text, lang)
text = do_replacements(text, replacements)
else
text = do_replacements(text, replacements)
if lang == "grc" and not untouchedDiacritics then
text = reorderDiacritics(text)
end
end
return text
end
export.interpret_shortcuts = interpret_shortcuts
local function hyphen_separated_replacements(text, lang)
local module = mw.loadData("Module:typing-aids/data/" .. lang)
local replacements = module[lang] or module
if not replacements then
error("??")
end
text = text:gsub("<sup>(.-)</sup>%-?", "%1-")
if replacements.pre then
for k, v in pairs(replacements.pre) do
text = gsub(text, k, v)
end
end
local output = {}
-- Find groups of characters that aren't hyphens or whitespace.
for symbol in text:gmatch("([^%-%s]+)") do
table.insert(output, replacements[symbol] or symbol)
end
return table.concat(output)
end
local function add_parameter(list, args, key, content)
if not content then content = args[key] end
args[key] = nil
if not content then return false end
if find(content, "=") or type(key) == "string" then
table.insert(list, key .. "=" .. content)
else
while list.maxarg < key - 1 do
table.insert(list, "")
list.maxarg = list.maxarg + 1
end
table.insert(list, content)
list.maxarg = key
end
return true
end
local function add_and_convert_parameter(list, args, key, altkey1, altkey2, trkey, lang, scriptKey)
if altkey1 and args[altkey1] then
add_and_convert_parameter(list, args, key, nil, nil, nil, lang, scriptKey)
key = altkey1
elseif altkey2 and args[altkey2] then
add_and_convert_parameter(list, args, key, nil, nil, nil, lang, scriptKey)
key = altkey2
end
local content = args[key]
if trkey and args[trkey] then
if not content then
content = args[trkey]
args[trkey] = nil
else
if args[trkey] ~= "-" then
error("Can't specify manual translit " .. trkey .. "=" ..
args[trkey] .. " along with parameter " .. key .. "=" .. content)
end
end
end
if not content then return false end
local trcontent = nil
-- If Sanskrit or Prakrit or Kannada and there's an acute accent specified somehow or other
-- in the source content, preserve the translit, which includes the
-- accent when the Devanagari doesn't.
if lang == "sa" or lang == "kn" or lang == "inc-ash" or lang == "inc-kam" or lang == "pra" or lang == "omr" or lang == "mai" or lang == "saz" or lang == "sd" or lang == "mwr" or lang == "skr" or lang == "pra-Knda" or lang == "pra-Deva" or lang == "doi" or lang == "sa-Gujr" or lang == "sa-Modi" or lang == "sa-Shrd" or lang == "sa-Sidd" or lang == "inc-oaw" or lang == "omr-Deva" or lang == "bho" then
local proposed_trcontent = interpret_shortcuts(content, lang .. "-tr")
if find(proposed_trcontent, acute) then
trcontent = proposed_trcontent
end
end
-- If Gothic and there's a macron specified somehow or other
-- in the source content that remains after canonicalization, preserve
-- the translit, which includes the accent when the Gothic doesn't.
if lang == "got" then
local proposed_trcontent = interpret_shortcuts(content, "got-tr")
if find(proposed_trcontent, macron) then
trcontent = proposed_trcontent
end
end
--[[
if lang == "gmy" then
local proposed_trcontent = interpret_shortcuts(content, "gmy-tr")
if find(proposed_trcontent, macron) then
trcontent = proposed_trcontent
end
end
--]]
local converted_content
if lang == "hit" or lang == "akk" then
trcontent = interpret_shortcuts(content, lang .. "-tr")
converted_content = hyphen_separated_replacements(content, lang)
elseif lang == "sux" or lang == "gmy" then
converted_content = hyphen_separated_replacements(content, lang)
elseif lang == "pal" or lang == "sog" or lang == "xpr" then
local script = args[scriptKey] or m_data.modules[lang].default
local script_object = require "Module:scripts".getByCode(script)
local proposed_trcontent = interpret_shortcuts(content, script .. "-tr")
local auto_tr = (require "Module:languages".getByCode(lang)
:transliterate(converted_content, script_object))
if proposed_trcontent ~= auto_tr then
trcontent = proposed_trcontent
end
converted_content = interpret_shortcuts(content, lang, script, nil, args.module)
else
converted_content = interpret_shortcuts(content, lang, args[scriptKey], nil, args.module)
end
add_parameter(list, args, key, converted_content)
if trcontent then
add_parameter(list, args, trkey, trcontent)
end
return true
end
local is_compound = listToSet{ "affix", "af", "compound", "com", "suffix", "suf", "prefix", "pre", "con", "confix", "surf" }
-- Technically lang, ux, and uxi aren't link templates, but they have many of the same parameters.
local is_link_template = listToSet{
"m", "m+", "langname-mention", "l", "ll",
"cog", "noncog", "cognate", "ncog", "nc", "noncognate", "cog+",
"m-self", "l-self",
"alter", "alt", "syn",
"alt sp", "alt form",
"alternative spelling of", "alternative form of",
"desc", "desctree", "lang", "usex", "ux", "uxi"
}
local is_two_lang_link_template = listToSet{ "der", "inh", "bor", "slbor", "lbor", "calque", "cal", "translit", "inh+", "bor+" }
local is_trans_template = listToSet{ "t", "t+", "t-check", "t+check" }
local function print_template(args)
local parameters = {}
for key, value in pairs(args) do
parameters[key] = value
end
local template = parameters[1]
local result = { }
local lang = nil
result.maxarg = 0
add_parameter(result, parameters, 1)
lang = parameters[2]
add_parameter(result, parameters, 2)
if is_link_template[template] then
add_and_convert_parameter(result, parameters, 3, "alt", 4, "tr", lang, "sc")
for _, param in ipairs({ 5, "gloss", "t" }) do
add_parameter(result, parameters, param)
end
elseif is_two_lang_link_template[template] then
lang = parameters[3]
add_parameter(result, parameters, 3)
add_and_convert_parameter(result, parameters, 4, "alt", 5, "tr", lang, "sc")
for _, param in ipairs({ 6, "gloss", "t" }) do
add_parameter(result, parameters, param)
end
elseif is_trans_template[template] then
add_and_convert_parameter(result, parameters, 3, "alt", nil, "tr", lang, "sc")
local i = 4
while true do
if not parameters[i] then
break
end
add_parameter(result, parameters, i)
i = i + 1
end
elseif is_compound[template] then
local i = 1
while true do
local sawparam = add_and_convert_parameter(result, parameters, i + 2, "alt" .. i, nil, "tr" .. i, lang, "sc")
if not sawparam then
break
end
for _, param in ipairs({ "id", "lang", "sc", "t", "pos", "lit" }) do
add_parameter(result, parameters, param .. i)
end
i = i + 1
end
else
error("Unrecognized template name '" .. template .. "'")
end
-- Copy any remaining parameters
for k in pairs(parameters) do
add_parameter(result, parameters, k)
end
return "{{" .. table.concat(result, "|") .. "}}"
end
function export.link(frame)
local args = frame.args or frame
return print_template(args)
end
function export.replace(frame)
local args = clone_args(frame)
local text, lang
if args[4] or args[3] or args.tr then
return print_template(args)
else
if args[2] then
lang, text = args[1], args[2]
else
lang, text = "all", args[1]
end
end
if lang == "akk" or lang == "gmy" or lang == "hit" or lang == "sux" then
return hyphen_separated_replacements(text, lang)
else
text = interpret_shortcuts(text, lang, args.sc, args.noreorder, args.module)
end
return text or ""
end
function export.example(frame)
local args = clone_args(frame)
local text, lang
if args[2] then
lang, text = args[1], args[2]
else
lang, text = "all", args[1]
end
local textparam
if find(text, "=") then
textparam = "2="..text -- Currently, "=" is only used in the shortcuts for Greek, and Greek is always found in the second parameter, since the first parameter specify the language, "grc".
else
textparam = text
end
local template = {
lang ~= "all" and lang or textparam,
lang ~= "all" and textparam or nil,
}
local output = { template_link("subst:chars", template) }
table.insert(output, "\n| ")
table.insert(output, lang ~= "all" and "<span lang=\""..lang.."\">" or "")
table.insert(output, export.replace({lang, text}))
table.insert(output, lang ~= "all" and "</span>" or "")
return table.concat(output)
end
function export.examples(frame)
local args = frame.getParent and frame:getParent().args or frame.args[1] and frame.args or frame
local examples = args[1] and mw.text.split(args[1], ";%s+") or error('No content in the first parameter.')
local lang = args["lang"]
local output = {
[[
{| class="wikitable"
! shortcut !! result
]]
}
local row = [[
|-
| templateCode || result
]]
for _, example in pairs(examples) do
local textparam
if find(example, "=") then
textparam = "2=" .. example -- Currently, "=" is only used in the shortcuts for Greek, and Greek is always found in the second parameter, since the first parameter specify the language, "grc".
else
textparam = example
end
local template = {
lang or textparam,
lang and textparam,
}
local result = export.replace{lang, example}
local content = {
templateCode = template_link("subst:chars", template),
result = tag(result, lang),
}
local function addContent(item)
if content[item] then
return content[item]
else
return 'No content for "' .. item .. '".'
end
end
local row = gsub(row, "%a+", addContent)
table.insert(output, row)
end
return table.concat(output) .. "|}"
end
return export
rlerxmekniu3z87shkn5hkp0sqrihb8
ප්රවර්ගය:ඉංග්රීසි යෙදුම්, උපසර්ග අනුව
14
113079
193573
165265
2024-11-21T11:04:19Z
Lee
19
Lee විසින් [[ප්රවර්ගය:ඉංග්රීසි terms by prefix]] සිට [[ප්රවර්ගය:ඉංග්රීසි යෙදුම්, උපසර්ග අනුව]] වෙත පිටුව ගෙන යන ලදී
165263
wikitext
text/x-wiki
{{auto cat}}
eomzlm5v4j7ond1phrju7cnue91g5qx
ප්රවර්ගය:භාෂාව අනුව යෙදුම්, උපසර්ග අනුව
14
113085
193563
165277
2024-11-21T10:56:34Z
Lee
19
Lee විසින් [[ප්රවර්ගය:භාෂාව අනුව Terms by prefix]] සිට [[ප්රවර්ගය:භාෂාව අනුව යෙදුම්, උපසර්ග අනුව]] වෙත පිටුව ගෙන යන ලදී
165275
wikitext
text/x-wiki
{{auto cat}}
eomzlm5v4j7ond1phrju7cnue91g5qx
ප්රවර්ගය:ජපන් යෙදුම්, උපසර්ග අනුව
14
125171
193567
192632
2024-11-21T10:58:14Z
Lee
19
Lee විසින් [[ප්රවර්ගය:Japanese terms by prefix]] සිට [[ප්රවර්ගය:ජපන් යෙදුම්, උපසර්ග අනුව]] වෙත පිටුව ගෙන යන ලදී
192631
wikitext
text/x-wiki
{{auto cat}}
eomzlm5v4j7ond1phrju7cnue91g5qx
ප්රවර්ගය:Classical ඉන්දුනීසියානු
14
125183
193337
192656
2024-11-20T12:08:02Z
Lee
19
Lee විසින් [[ප්රවර්ගය:Classical Indonesian]] සිට [[ප්රවර්ගය:Classical ඉන්දුනීසියානු]] වෙත පිටුව ගෙන යන ලදී
192655
wikitext
text/x-wiki
{{auto cat|lect=1}}
ertee7cys9kcm61xm19ibbz6xr74sxz
ප්රවර්ගය:ජපන් කන්ජි, ゐ ලෙස කියවන
14
125185
193530
192660
2024-11-21T10:49:30Z
Pinthura
2424
Pinthura විසින් [[ප්රවර්ගය:Japanese kanji read as ゐ]] සිට [[ප්රවර්ගය:ජපන් කන්ජි, ゐ ලෙස කියවන]] වෙත පිටුව ගෙන යන ලදී: සේවා: නව ප්රවර්ග නාමය වෙත ගෙනයාම.
192659
wikitext
text/x-wiki
{{auto cat|histconsol=い}}
q2vrwqzaew6kgok4zc657d2hx2gevt5
193533
193530
2024-11-21T10:49:50Z
Pinthura
2424
සේවා: ඉංග්රීසි ව්යාපෘතිය වෙත සබැඳියක් එක් කිරීම.
193533
wikitext
text/x-wiki
{{auto cat|histconsol=い}}
[[en:Category:Japanese kanji read as ゐ]]
iahpbl60aitra3i5oijsclzjr6mu2yv
ප්රවර්ගය:Brazilian පෘතුගීසි
14
125210
193333
192711
2024-11-20T12:06:03Z
Lee
19
Lee විසින් [[ප්රවර්ගය:Brazilian Portuguese]] සිට [[ප්රවර්ගය:Brazilian පෘතුගීසි]] වෙත පිටුව ගෙන යන ලදී
192710
wikitext
text/x-wiki
{{auto cat|lect=1}}
ertee7cys9kcm61xm19ibbz6xr74sxz
ප්රවර්ගය:Bahian පෘතුගීසි
14
125215
193331
192721
2024-11-20T12:05:04Z
Lee
19
Lee විසින් [[ප්රවර්ගය:Bahian Portuguese]] සිට [[ප්රවර්ගය:Bahian පෘතුගීසි]] වෙත පිටුව ගෙන යන ලදී
192720
wikitext
text/x-wiki
{{auto cat|lect=1}}
ertee7cys9kcm61xm19ibbz6xr74sxz
ප්රවර්ගය:Contemporary ලතින්
14
125222
193355
192735
2024-11-20T12:32:15Z
Lee
19
Lee විසින් [[ප්රවර්ගය:Contemporary Latin]] සිට [[ප්රවර්ගය:Contemporary ලතින්]] වෙත පිටුව ගෙන යන ලදී
192734
wikitext
text/x-wiki
{{auto cat|lect=1|def=Latin since c. 1900|cat=New Latin|type=extant}}
bs63a9876md9qu1f8v70cr1akosifkp
193357
193355
2024-11-20T12:32:43Z
Lee
19
193357
wikitext
text/x-wiki
{{auto cat|lect=1|def=Latin since c. 1900|cat=New ලතින්|type=extant}}
d5gu0dfarvoizxn288ix1l9xhwmdvu2
ප්රවර්ගය:Venetian භාෂාව
14
125437
193388
193234
2024-11-21T08:30:49Z
Pinthura
2424
රොබෝ: යළියොමුව, මෘදු ප්රවර්ග යළියොමුවක් බවට හැරවීම
193388
wikitext
text/x-wiki
{{category redirect|Venetan භාෂාව}}
koe4m8wivpdxb2sygdgwywelql4l7bp
ප්රවර්ගය:Venetan language
14
125439
193389
193238
2024-11-21T08:30:59Z
Pinthura
2424
රොබෝ: යළියොමුව, මෘදු ප්රවර්ග යළියොමුවක් බවට හැරවීම
193389
wikitext
text/x-wiki
{{category redirect|Venetan භාෂාව}}
koe4m8wivpdxb2sygdgwywelql4l7bp
ප්රවර්ගය:Latin terms by suffix
14
125447
193390
193278
2024-11-21T08:31:09Z
Pinthura
2424
රොබෝ: යළියොමුව, මෘදු ප්රවර්ග යළියොමුවක් බවට හැරවීම
193390
wikitext
text/x-wiki
{{category redirect|ලතින් යෙදුම්, ප්රත්ය අනුව}}
1r7w4v5sgnb243et4udwtp65bz9pbry
ප්රවර්ගය:Latin terms by etymology
14
125452
193391
193288
2024-11-21T08:31:19Z
Pinthura
2424
රොබෝ: යළියොමුව, මෘදු ප්රවර්ග යළියොමුවක් බවට හැරවීම
193391
wikitext
text/x-wiki
{{category redirect|ලතින් යෙදුම්, නිරුක්තිය අනුව}}
ls6eir2zicntlt8ofvjjdpfqflskvsb
ප්රවර්ගය:Latin relational adjectives
14
125464
193392
193317
2024-11-21T08:31:29Z
Pinthura
2424
රොබෝ: යළියොමුව, මෘදු ප්රවර්ග යළියොමුවක් බවට හැරවීම
193392
wikitext
text/x-wiki
{{category redirect|ලතින් relational adjectives}}
p5ryfmulh4z7cjm2gpeck8u5qnnhob4
ප්රවර්ගය:Latin 3-syllable words
14
125465
193393
193319
2024-11-21T08:31:39Z
Pinthura
2424
රොබෝ: යළියොමුව, මෘදු ප්රවර්ග යළියොමුවක් බවට හැරවීම
193393
wikitext
text/x-wiki
{{category redirect|ලතින් 3-syllable words}}
6p2anqqt30nnzf6feesklha5w2rclpe
Module:labels/data/lang/kix
828
125466
193325
2024-11-11T11:56:29Z
en>Surjection
0
Protected "[[Module:labels/data/lang/kix]]": Highly visible template/module ([Edit=Allow only autoconfirmed users] (indefinite) [Move=Allow only autoconfirmed users] (indefinite))
193325
Scribunto
text/plain
local labels = {}
labels["Patsho"] = {
aliases = { "Pathso" },
Wikipedia = true,
regional_categories = true,
}
return require("Module:labels").finalize_data(labels)
tn23pzitw0vtxvqfy3onohjluzcundi
193326
193325
2024-11-20T11:59:27Z
Lee
19
[[:en:Module:labels/data/lang/kix]] වෙතින් එක් සංශෝධනයක්
193325
Scribunto
text/plain
local labels = {}
labels["Patsho"] = {
aliases = { "Pathso" },
Wikipedia = true,
regional_categories = true,
}
return require("Module:labels").finalize_data(labels)
tn23pzitw0vtxvqfy3onohjluzcundi
ප්රවර්ගය:Bahian Portuguese
14
125467
193332
2024-11-20T12:05:06Z
Lee
19
Lee විසින් [[ප්රවර්ගය:Bahian Portuguese]] සිට [[ප්රවර්ගය:Bahian පෘතුගීසි]] වෙත පිටුව ගෙන යන ලදී
193332
wikitext
text/x-wiki
#යළියොමුව [[:ප්රවර්ගය:Bahian පෘතුගීසි]]
lzo81l2uz1xhhberfoeu2nbjdavtyq0
193394
193332
2024-11-21T08:31:49Z
Pinthura
2424
රොබෝ: යළියොමුව, මෘදු ප්රවර්ග යළියොමුවක් බවට හැරවීම
193394
wikitext
text/x-wiki
{{category redirect|Bahian පෘතුගීසි}}
t9qenv81vg1hicmt6pt0u6u4c1remxr
ප්රවර්ගය:Brazilian Portuguese
14
125468
193334
2024-11-20T12:06:04Z
Lee
19
Lee විසින් [[ප්රවර්ගය:Brazilian Portuguese]] සිට [[ප්රවර්ගය:Brazilian පෘතුගීසි]] වෙත පිටුව ගෙන යන ලදී
193334
wikitext
text/x-wiki
#යළියොමුව [[:ප්රවර්ගය:Brazilian පෘතුගීසි]]
1c1mhqvma0amkk6x4xitapqhgvjcmzl
193395
193334
2024-11-21T08:31:59Z
Pinthura
2424
රොබෝ: යළියොමුව, මෘදු ප්රවර්ග යළියොමුවක් බවට හැරවීම
193395
wikitext
text/x-wiki
{{category redirect|Brazilian පෘතුගීසි}}
dmvponbv18vcvwgpj7runetyctg5b2j
ප්රවර්ගය:Classical Indonesian
14
125469
193338
2024-11-20T12:08:03Z
Lee
19
Lee විසින් [[ප්රවර්ගය:Classical Indonesian]] සිට [[ප්රවර්ගය:Classical ඉන්දුනීසියානු]] වෙත පිටුව ගෙන යන ලදී
193338
wikitext
text/x-wiki
#යළියොමුව [[:ප්රවර්ගය:Classical ඉන්දුනීසියානු]]
1oyp4j1xp7gmko0habbz6vc3u3v3blj
193396
193338
2024-11-21T08:32:09Z
Pinthura
2424
රොබෝ: යළියොමුව, මෘදු ප්රවර්ග යළියොමුවක් බවට හැරවීම
193396
wikitext
text/x-wiki
{{category redirect|Classical ඉන්දුනීසියානු}}
exp509nhwehjtyxlvyjoz9py8q0yt06
ප්රවර්ගය:Post-classical ලතින්
14
125470
193343
2024-09-17T08:53:51Z
en>WingerBot
0
rename dialect= to lect= in {{auto cat}}
193343
wikitext
text/x-wiki
{{auto cat|lect=1|def=[[Latin]] as used after the Classical period (c. 100 {{BCE}} — 200 {{CE}})|noreg=1}}
icleojf21r28szfoihp1yrh69nz5y2b
193344
193343
2024-11-20T12:12:33Z
Lee
19
[[:en:Category:Post-classical_Latin]] වෙතින් එක් සංශෝධනයක්
193343
wikitext
text/x-wiki
{{auto cat|lect=1|def=[[Latin]] as used after the Classical period (c. 100 {{BCE}} — 200 {{CE}})|noreg=1}}
icleojf21r28szfoihp1yrh69nz5y2b
193349
193344
2024-11-20T12:22:02Z
Lee
19
Lee විසින් [[ප්රවර්ගය:Post-classical Latin]] සිට [[ප්රවර්ගය:Post-classical ලතින්]] වෙත පිටුව ගෙන යන ලදී
193343
wikitext
text/x-wiki
{{auto cat|lect=1|def=[[Latin]] as used after the Classical period (c. 100 {{BCE}} — 200 {{CE}})|noreg=1}}
icleojf21r28szfoihp1yrh69nz5y2b
ප්රවර්ගය:Varieties of ලතින්
14
125471
193345
2023-09-24T05:24:10Z
en>WingerBot
0
Created page with "{{auto cat}}"
193345
wikitext
text/x-wiki
{{auto cat}}
eomzlm5v4j7ond1phrju7cnue91g5qx
193346
193345
2024-11-20T12:13:08Z
Lee
19
[[:en:Category:Varieties_of_Latin]] වෙතින් එක් සංශෝධනයක්
193345
wikitext
text/x-wiki
{{auto cat}}
eomzlm5v4j7ond1phrju7cnue91g5qx
193347
193346
2024-11-20T12:14:26Z
Lee
19
Lee විසින් [[ප්රවර්ගය:Varieties of Latin]] සිට [[ප්රවර්ගය:Varieties of ලතින්]] වෙත පිටුව ගෙන යන ලදී
193345
wikitext
text/x-wiki
{{auto cat}}
eomzlm5v4j7ond1phrju7cnue91g5qx
ප්රවර්ගය:Varieties of Latin
14
125472
193348
2024-11-20T12:14:26Z
Lee
19
Lee විසින් [[ප්රවර්ගය:Varieties of Latin]] සිට [[ප්රවර්ගය:Varieties of ලතින්]] වෙත පිටුව ගෙන යන ලදී
193348
wikitext
text/x-wiki
#යළියොමුව [[:ප්රවර්ගය:Varieties of ලතින්]]
tplxdkhxnggtoae0irzjo1w6u1l0mkt
193397
193348
2024-11-21T08:32:19Z
Pinthura
2424
රොබෝ: යළියොමුව, මෘදු ප්රවර්ග යළියොමුවක් බවට හැරවීම
193397
wikitext
text/x-wiki
{{category redirect|Varieties of ලතින්}}
qsvq0evgn84dt6widrvo5tthfihbv16
ප්රවර්ගය:Post-classical Latin
14
125473
193350
2024-11-20T12:22:03Z
Lee
19
Lee විසින් [[ප්රවර්ගය:Post-classical Latin]] සිට [[ප්රවර්ගය:Post-classical ලතින්]] වෙත පිටුව ගෙන යන ලදී
193350
wikitext
text/x-wiki
#යළියොමුව [[:ප්රවර්ගය:Post-classical ලතින්]]
tqbx1v5k9cv6hry139h2a93zhenlphf
193398
193350
2024-11-21T08:32:29Z
Pinthura
2424
රොබෝ: යළියොමුව, මෘදු ප්රවර්ග යළියොමුවක් බවට හැරවීම
193398
wikitext
text/x-wiki
{{category redirect|Post-classical ලතින්}}
t59pqvmpt6dtz0u1f7oaybs69cj9zut
සැකිල්ල:BCE
10
125474
193351
2024-04-27T10:57:57Z
en>SurjectionBot
0
Protected "[[Template:BCE]]": (bot) automatically protect highly visible templates/modules (reference score: 1971+ >= 1000) ([Edit=Allow only autoconfirmed users] (indefinite) [Move=Allow only autoconfirmed users] (indefinite))
193351
wikitext
text/x-wiki
{{B.C.E.|nodots=1}}<noinclude>{{documentation}}</noinclude>
oqdmxedsyypjux89q02zoa6buchhwep
193352
193351
2024-11-20T12:22:36Z
Lee
19
[[:en:Template:BCE]] වෙතින් එක් සංශෝධනයක්
193351
wikitext
text/x-wiki
{{B.C.E.|nodots=1}}<noinclude>{{documentation}}</noinclude>
oqdmxedsyypjux89q02zoa6buchhwep
සැකිල්ල:BCE/documentation
10
125475
193353
2022-03-25T16:12:30Z
en>Sgconlaw
0
Updated documentation
193353
wikitext
text/x-wiki
{{documentation subpage}}
===Usage===
Use this template to display the abbreviation {{BCE}} (before the [[Common Era]]). The template links to an explanation of the term at [[Appendix:Glossary]].
The template displays the abbreviation without full stops or periods, and is equivalent to {{temp|B.C.E.|nodots=1}}. To have the abbreviation display full stops or periods, use {{temp|B.C.E.}}.
There is a [[Special:Preferences#mw-prefsection-gadgets|gadget]] that will turn this template and {{temp|CE}} into the [[B.C.]]/[[A.D.]] format.
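For example, <code><nowiki>c. 100 {{BCE}}</nowiki></code> displays as "c. 100 {{BCE}}": the abbreviation appears without full stops and links to [[Appendix:Glossary]].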
===See also===
* {{temp|CE}}
* {{temp|C.E.}}
===Technical information===
This template relies on {{temp|B.C.E.}}
<includeonly>
[[Category:Dating templates]]
[[Category:Text format templates]]
</includeonly>
0olj45h83f5ub5ujw1acn7cahui6qgg
193354
193353
2024-11-20T12:22:55Z
Lee
19
[[:en:Template:BCE/documentation]] වෙතින් එක් සංශෝධනයක්
193353
wikitext
text/x-wiki
{{documentation subpage}}
===Usage===
Use this template to display the abbreviation {{BCE}} (before the [[Common Era]]). The template links to an explanation of the term at [[Appendix:Glossary]].
The template displays the abbreviation without full stops or periods, and is equivalent to {{temp|B.C.E.|nodots=1}}. To have the abbreviation display full stops or periods, use {{temp|B.C.E.}}.
There is a [[Special:Preferences#mw-prefsection-gadgets|gadget]] that will turn this template and {{temp|CE}} into the [[B.C.]]/[[A.D.]] format.
===See also===
* {{temp|CE}}
* {{temp|C.E.}}
===Technical information===
This template relies on {{temp|B.C.E.}}
<includeonly>
[[Category:Dating templates]]
[[Category:Text format templates]]
</includeonly>
0olj45h83f5ub5ujw1acn7cahui6qgg
ප්රවර්ගය:Contemporary Latin
14
125476
193356
2024-11-20T12:32:17Z
Lee
19
Lee විසින් [[ප්රවර්ගය:Contemporary Latin]] සිට [[ප්රවර්ගය:Contemporary ලතින්]] වෙත පිටුව ගෙන යන ලදී
193356
wikitext
text/x-wiki
#යළියොමුව [[:ප්රවර්ගය:Contemporary ලතින්]]
4mx6we47xp06ke167dnr5s4nvdu24qc
193399
193356
2024-11-21T08:32:39Z
Pinthura
2424
රොබෝ: යළියොමුව, මෘදු ප්රවර්ග යළියොමුවක් බවට හැරවීම
193399
wikitext
text/x-wiki
{{category redirect|Contemporary ලතින්}}
ljk4ej5pn2pgtkaiqjahloizarfs8o6
සංස්කාරක
0
125477
193358
2024-11-20T13:10:32Z
Lee
19
'== සිංහල == === නිරුක්තිය === {{rfe|si}} === නාම පදය === {{si-noun}} # {{rfdef|si}} <!-- ==== පරිවර්තන ==== {{trans-top|පරිවර්තන}} * ඉංග්රීසි: {{t|en|<<ඉංග්රීසි වචනය>>}} {{trans-bottom}} === අමතර අවධානයට === * {{l|si|<<ආශ්රිත පවතින වෙනත් ව...' යොදමින් නව පිටුවක් තනන ලදි
193358
wikitext
text/x-wiki
== සිංහල ==
=== නිරුක්තිය ===
{{rfe|si}}
=== නාම පදය ===
{{si-noun}}
# {{rfdef|si}}
<!--
==== පරිවර්තන ====
{{trans-top|පරිවර්තන}}
* ඉංග්රීසි: {{t|en|<<ඉංග්රීසි වචනය>>}}
{{trans-bottom}}
=== අමතර අවධානයට ===
* {{l|si|<<ආශ්රිත පවතින වෙනත් වචන>>}}
-->
37g9s13iio6w9i2yw7aw46v28viw7h4
合
0
125478
193359
2024-11-20T13:14:38Z
Lee
19
නිර්මාණය
193359
wikitext
text/x-wiki
{{also|台|🈴|閤|会|슴}}
{{character info}}
==Translingual==
{{stroke order|strokes=6}}
===Han character===
{{Han char|rn=30|rad=口|as=03|sn=6|four=80601|canj=OMR|ids=⿱亼口}}
====Derived characters====
* {{lang-lite|mul|[[𫡥]], [[佮]], [[冾]], [[哈]], [[垥]], [[姶]], [[峆]], [[帢]], [[㢵]], [[㣛]], [[恰]], [[拾]], [[洽]], [[𪯤]], [[𰃣]], [[䏩]], [[㭘]], [[烚]], [[𬌗]], [[珨]], [[䢔]], [[䀫]], [[硆]], [[祫]], [[秴]], [[粭]], [[給]] ([[给]]), [[耠]], [[𦕲]], [[䑪]], [[蛤]], [[袷]], [[鿘]], [[詥]], [[𧳇]], [[䞩]], [[跲]], [[鉿]] ([[铪]]), [[鞈]], [[韐]], [[餄]] ([[饸]]), [[鿢]], [[𩩂]], [[𩳋]], [[鮯]] ([[𫚗]]), [[𪑇]], [[䶎]], [[𪘁]], [[㓣]], [[𪠁]], [[郃]], [[㧱]], [[敆]], [[㪉]], [[欱]], [[𤙖]], [[翖]], [[䧻]], [[頜]] ([[颌]]), [[鴿]] ([[鸽]]), [[搿]]}}
* {{lang-lite|mul|[[𣭝]], [[㝓]], [[峇]], [[𰂅]], [[𣆗]], [[荅]], [[䆟]], [[答]], [[𠷡]], [[䨐]], [[𩭆]], [[䶀]], [[𮯚]], [[鿖]], [[𡋛]], [[𪦻]], [[弇]], [[𢙅]], [[拿]], [[𤥓]], [[畣]], [[㿯]], [[盒]], [[𥅽]], [[翕]], [[搻]], [[樖]], [[盫]], [[龕]] ([[龛]]), [[𡄬]], [[匌]], [[匼]], [[㕉]], [[𢈈]], [[㾑]], [[閤]] ([[𬮤]]), [[𦒈]], [[𡇞]]}}
====References====
{{Han ref|kx=0174.240|dkj=03287|dj=0387.170|hdz=10581.010|uh=5408}}
==Chinese==
{{zh-wp|zh}}
===Glyph origin===
{{Han etym}}
{{Han compound|亼|口|t2=mouth|ls=ic}} : Two mouths speaking together. See also [[會]].
===Etymology 1===
{{zh-forms}}
From {{inh-lite|zh|sit-pro|*gap ~ kap}}. Compare {{och-l|蓋|to cover}} and {{och-l|盍|to unite}} (STEDT). Outside of Sinitic, possibly cognate with {{cog-lite|my|sc=Mymr|ကပ်|tr=kap|t=to approach}}.
====Pronunciation====
{{zh-pron
|m=hé
|m-s=ho2
|c=hap6
|c-t=hap5,gap4
|c-t_note=gap4 - in {{zh-l|三合|tr=-|t=Sanhe, Taishan}}
|g=hot7
|h=pfs=ha̍p/kap/kak;gd=hab6
|j=hah5
|md=hăk
|mn=ml,jj,tw,ph:ha̍p/ml,jj,tw,ph:ha̍h/zz:a̍h
|mn_note=ha̍p - literary (“to close; to join; to add up to; classifier for matching sets”); ha̍h/a̍h - vernacular (“to suit; to get along well”)
|mn-t=hah8/hab8/gah4
|mn-t_note=hab8 - alternative pronunciation for "to close"; gah4 - "to fit", etc.
|w=sh:8gheq,7keq
|w_note=hheq (T5) - regular pronunciation for "together", etc; keq (T4) - alternative pronunciation in certain words like 合算
|mc=1
|oc=2,2
|ma=
|cat=v,a,n,pn
}}
====Definitions====
{{head|zh|hanzi}}
# to [[close]]; to [[shut]]
#: {{zh-x|合上|close}}
# to [[join]]; to [[combine]]; to [[unite]]; to [[bring]] [[together]]
#: {{zh-x|組合|to assemble}}
#: {{zh-x|集合|to gather, to assemble}}
# to [[suit]]; to [[fit]]
#: {{zh-x|適合|to fit, to suit}}
#: {{zh-x|合適|suitable, fitting}}
#: {{zh-x|魚 和 紅酒 不合。|Fish and red wine don't '''go well together'''.}}
#: {{zh-x|那 個 店 裡 沒有 合 我 尺寸 的 帽子。|There are no hats in that store that '''fit''' me.}}
# to have [[sexual intercourse]]
# to [[fight]], to have a [[confrontation]] with
# to [[add up]] to; to be [[equivalent]] to; to [[amount]] to
#: {{zh-x|一 公頃 合 十五 市畝。|A hectare '''is equivalent to''' 15 mu.}}
# [[spouse]]
# [[whole]]; [[entire]]
# [[together]]
#: {{zh-x|這 張 卡片 是 我們 全家 合 送 的。祝 您 母親節 快樂{lè}。|This card comes from the '''whole''' family. Happy Mother's Day.}}
# {{†}} {{alt form|zh|盒|t=box}}
# {{lb|zh|game|battle}} [[round]]
# {{lb|zh|astronomy}} [[conjunction]]
# {{lb|zh|Hokkien}} to [[get along]] well
# {{lb|zh|Chinese phonetics}} {{zh-short|合口|closed-mouthed}}
# {{lb|zh|Xiamen}} {{zh-classifier|matching sets of instruments such as teaware}}
# {{†}} {{zh-classifier|number of fights}}
# {{†}} to [[compound]]; to [[make up]] {{gl|medicine, etc.}}
#* {{zh-x|又 合 狂藥,令 人 服 之,父子 兄弟 不 相 知識,唯 以 殺害 為{wéi} 事。|ref=Weishu}}
#* {{zh-x|誰 敢 合 毒藥 與 你?這廝 好 大膽 也。|ref=Dou E Yuan}}
#* {{zh-x|吾 家 ^@葛 ^@巾 娘子,手 合 鴆湯,其 速 飲!|ref=Liaozhai}}
# {{surname|zh}}
====Compounds====
{{col3|zh|三合|不合|九合|交合|付合|偶合|公合|六合|分合|切合|勘合|化合|匡合|匯合|印合|只合|合上|合下|合伙|合作|合併|合傳|合兒|合刃|合券|合刻|合劑|合力|合十|合口|合同|合唱|合圍|合夥|合奏|合婚|合子|合宅|合宜|合家|合局|合巹|合并|合度|合式|合弦|合影|合後|合從|合意|合成|合戰|合手|合折|合抱|合拍|合掌|合撲|合擊|合攏|合數|合族|合昏|合是|合時|合朔|合本|合板|合格|合機|合款|合歡|合殺|合氣|合沓|合法|合注|合流|合準|合溜|合演|合火|合照|合營|合獨|合球|合理|合璧|合生|合用|合當|合眼|合租|合窆|合符|合算|合約|合縱|合纖|合群|合聲|合股|合苦|合著|合葉|合葬|合計|合該|合謀|合變|合谷|合資|合身|合轍|合辦|合造|合遝|合適|合醵|合金|合鏡|合面|合音|合頭|合體|合髻|合龍|吻合|和合|咬合|四合|回合|場合|夜合|契合|好合|密合|寡合|對合|巧合|布合|愈合|成合|打合|投合|折合|拌合|捏合|捻合|接合|揉合|摻合|撮合|攙合|整合|暌合|暗合|會合|比合|沓合|混合|湊合|溶合|烏合|熱合|牉合|牽合|理合|瓦合|當合|癒合|百合|相合|砌合|符合|簇合|糅合|糾合|納合|索合|組合|結合|統合|綜合|綴合|縫合|縮合|總合|耦合|聚合|聯合|苟合|融合|蟻合|複合|訢合|說合|調合|跑合|迎合|迴合|連合|遇合|適合|配合|重合|野合|鈿合|關合|集合|離合|鬥合|鳩合|黏合|齧合|久合垸|三合院|三合垸|合心|拼合|三合會|志同道合|媾合|合時宜|固拉合瑪|合浦|西合休|阿合雅|合肥|nan:袂合<tr:bē ha̍h>|吾合沙魯}}
====Descendants====
{{CJKV||||hợp}}
===Etymology 2===
{{zh-forms}}
====Pronunciation====
{{zh-pron
|m=gě
<!--
|m-s=go2
|m-s_note=substitution character
-->
|c=gap3
|h=pfs=khap
|mn=ml,jj,tw:kap
|mn_note=literary reading
|mn-t=hah8
|mc=2
|oc=1,1
|cat=n
}}
====Definitions====
{{head|zh|hanzi}}
# unit of volume, equal to one tenth of a {{zh-l|升}}
## {{lb|zh|Han dynasty}} equal to 2 {{zh-l|龠}}
===Etymology 3===
{{zh-forms}}
====Pronunciation====
{{zh-pron
|m=hé
|c=ho4
|h=
|mn=xm:hōⁿ
|mn-t=ho5
|cat=n
}}
====Definitions====
{{head|zh|hanzi}}
# {{lb|zh|music}} {{ng|[[Kunqu]] [[gongche]] [[notation]] for the [[note]] [[low sol]] (5̣).}}
# {{lb|zh|music}} {{ng|[[Cantonese opera]] [[gongche]] [[notation]] for the [[note]] [[low sol]] (5̣).}}
====Compounds====
{{col3|zh|合尺}}
====Derived terms====
{{col3|zh|𪛗|yue:佮<tr:ho4>}}
===Etymology 4===
{{zh-forms}}
====Pronunciation====
{{zh-pron
|mn=kap/kah
|mn_note=kap - literary; kah - vernacular
|mn-t=gah4
|cat=v,mn:conj,mn-t:conj,mn:prep,mn-t:prep
}}
====Definitions====
{{head|zh|hanzi}}
# {{lb|zh|Hokkien|Teochew}} {{alt form|zh|佮|tr=-|t=[[and]]; [[with]]}}
===Etymology 5===
{{zh-forms|alt=敆}}
====Pronunciation====
{{zh-pron
|mn=kap
|mn-t=gab4
|mn-t_note=gab4 - "to combine" (e.g. 合藥)
|cat=v
}}
====Definitions====
{{head|zh|hanzi}}
# {{lb|zh|Hokkien|Teochew}} to [[wrap]] a [[book]] {{gl|with paper, etc.}}
# {{lb|zh|Hokkien|TCM}} to have a [[traditional Chinese medicine]] [[prescription]] [[filled]] {{gl|of the patient}}
# {{lb|zh|Mainland|_|Hokkien}} to [[embrace]] in one's [[bosom]]
# {{lb|zh|Mainland|_|Hokkien|figurative}} to [[watch over]]; to [[look after]]; to [[take care]] of {{gl|a child, etc.}}
# {{lb|zh|Taiwan Hokkien}} to [[match]]; to [[go]] with
# {{lb|zh|Teochew}} to [[concoct]] {{gl|to make medicine, etc.}}
# {{lb|zh|Teochew}} to [[form]] a [[partnership]]
# {{lb|zh|Teochew}} to [[bind]] {{gl|into a book}}
=====Compounds=====
{{col3|zh|合澉仔|合簀仔|合冊|合藥|合帳|合縫|合線|合房|合萬|合萬櫃|合喙|合逝|合食|合錢|合磨|合跤合手|合合磨}}
===Etymology 6===
{{zh-see|閤|s|to [[close]], to [[shut]]; [[all of]], [[whole]] of}}
===Etymology 7===
{{zh-see|盒|ss}}
===References===
* {{R:yue:Hanzi}}
{{cat|cmn|Elementary Mandarin}}
==Japanese==
===Kanji===
{{ja-kanji|grade=2|rs=口03}}
# [[fit]]
# [[suit]]
# [[join]]
# one [[tenth]]
====Readings====
From {{der|ja|ltc|-}} {{ltc-l|合|id=1}}; compare {{cog|cmn|合|tr=hé, gě}}:
{{ja-readings
|goon=ごう<ごふ
|kanon=こう<かふ
|kanyoon=かっ, がっ, ごう<がふ
}}
From {{der|ja|ltc|-}} {{ltc-l|合|id=2}}; compare {{cog|cmn|合|tr=gě}}:
{{ja-readings
|goon=こう<こふ
|kanon=こう<かふ
}}
From native {{w|Japanese_language|Japanese}} roots:
{{ja-readings
|kun=あ-う<あ-ふ, あ-わす<あ-はす, あ-わせる<あ-はせる, あ-い<あ-ひ, あつ-まる, あつ-める, あわい-<あはひ-, あ-わさる<あ-はさる, あわ-せ<あは-せ
|nanori=あい, あう, かい, はる, よし
}}
====Compounds====
* {{ja-r|合%憎|あい%にく|gloss=[[unfortunately]]}}
* {{ja-r|連%合|れん%ごう|gloss=[[union]], [[alliance]], [[combination]]}}
* {{ja-r|総%合|そう%ごう|gloss=synthesis, consolidation}}
===Etymology 1===
{{ja-kanjitab|ごう|yomi=o}}
====Counter====
{{ja-pos|counter|hhira=がふ|ごう}}
# covered [[container]]s
# [[battle]]s
====Noun====
{{ja-noun|ごう|hhira=がふ}}
# 0.18039 [[liter]]s, equaling ten [[shaku]] or a tenth of a [[shō]]
# a tenth of the distance from the base to the summit of a mountain
=====See also=====
* {{ja-r|升|ます}}
* {{ja-r|勺|しゃく}}
===Etymology 2===
{{ja-kanjitab|あ|o1=い|yomi=k}}
{{ja-see|あい}}
{{C|ja|Units of measure|sort=こう'}}
==Korean==
===Etymology===
{{rfe|ko|Middle Korean readings, if any}}
===Pronunciation===
{{ko-hanja-pron|합}}
===Hanja===
{{ko-hanja-search}}
{{ko-hanja|합하다|합할|합}}
# {{hanja form of|합|to [[unite]]}}
==Vietnamese==
===Han character===
{{vi-readings|rs=口03
|hanviet=hợp-tdcndg;tdcntd;tvctdhv;hvttd;tchvtd, hiệp-tvctdhv;hvttd;tchvtd, hạp-gdhn;tvctdhv, cáp-tvctdhv;hvttd
|phienthiet=
|nom=hợp-tdcndg;tdcntd;gdhn, họp-tdcndg;tdcntd;gdhn, hiệp-tdcndg;tdcntd, hạp-gdhn, cáp-tdcndg, cóp-gdhn, gộp-gdhn, sáp-tdcndg
}}
# {{vi-Han form of|hợp|to [[unite]], [[suitable]]}}
# {{vi-Han form of|hiệp|[[unite]]}}
# {{vi-Han form of|hạp|to [[unite]], [[suitable]] (rare spelling of {{l|vi|hợp}})}}
# {{vi-Nom form of|sáp|flexible material used to store cosmetics}}
===References===
<references/>
16hyau2nmdevlhp2i0pdktj7y4rl4x0
193360
193359
2024-11-20T13:15:37Z
Pinthura
2424
යොමු තොරතුරු පරිවර්තනය
193360
wikitext
text/x-wiki
{{also|台|🈴|閤|会|슴}}
{{character info}}
== සර්ව භාෂාමය ==
{{stroke order|strokes=6}}
=== හන් අනුලක්ෂණය ===
{{Han char|rn=30|rad=口|as=03|sn=6|four=80601|canj=OMR|ids=⿱亼口}}
==== ව්යුත්පන්න අනුලක්ෂණ ====
* {{lang-lite|mul|[[𫡥]], [[佮]], [[冾]], [[哈]], [[垥]], [[姶]], [[峆]], [[帢]], [[㢵]], [[㣛]], [[恰]], [[拾]], [[洽]], [[𪯤]], [[𰃣]], [[䏩]], [[㭘]], [[烚]], [[𬌗]], [[珨]], [[䢔]], [[䀫]], [[硆]], [[祫]], [[秴]], [[粭]], [[給]] ([[给]]), [[耠]], [[𦕲]], [[䑪]], [[蛤]], [[袷]], [[鿘]], [[詥]], [[𧳇]], [[䞩]], [[跲]], [[鉿]] ([[铪]]), [[鞈]], [[韐]], [[餄]] ([[饸]]), [[鿢]], [[𩩂]], [[𩳋]], [[鮯]] ([[𫚗]]), [[𪑇]], [[䶎]], [[𪘁]], [[㓣]], [[𪠁]], [[郃]], [[㧱]], [[敆]], [[㪉]], [[欱]], [[𤙖]], [[翖]], [[䧻]], [[頜]] ([[颌]]), [[鴿]] ([[鸽]]), [[搿]]}}
* {{lang-lite|mul|[[𣭝]], [[㝓]], [[峇]], [[𰂅]], [[𣆗]], [[荅]], [[䆟]], [[答]], [[𠷡]], [[䨐]], [[𩭆]], [[䶀]], [[𮯚]], [[鿖]], [[𡋛]], [[𪦻]], [[弇]], [[𢙅]], [[拿]], [[𤥓]], [[畣]], [[㿯]], [[盒]], [[𥅽]], [[翕]], [[搻]], [[樖]], [[盫]], [[龕]] ([[龛]]), [[𡄬]], [[匌]], [[匼]], [[㕉]], [[𢈈]], [[㾑]], [[閤]] ([[𬮤]]), [[𦒈]], [[𡇞]]}}
==== මූලාශ්ර ====
{{Han ref|kx=0174.240|dkj=03287|dj=0387.170|hdz=10581.010|uh=5408}}
== චීන ==
{{zh-wp|zh}}
===Glyph origin===
{{Han etym}}
{{Han compound|亼|口|t2=mouth|ls=ic}} : Two mouths speaking together. See also [[會]].
=== නිරුක්තිය 1 ===
{{zh-forms}}
From {{inh-lite|zh|sit-pro|*gap ~ kap}}. Compare {{och-l|蓋|to cover}} and {{och-l|盍|to unite}} (STEDT). Outside of Sinitic, possibly cognate with {{cog-lite|my|sc=Mymr|ကပ်|tr=kap|t=to approach}}.
==== උච්චාරණය ====
{{zh-pron
|m=hé
|m-s=ho2
|c=hap6
|c-t=hap5,gap4
|c-t_note=gap4 - in {{zh-l|三合|tr=-|t=Sanhe, Taishan}}
|g=hot7
|h=pfs=ha̍p/kap/kak;gd=hab6
|j=hah5
|md=hăk
|mn=ml,jj,tw,ph:ha̍p/ml,jj,tw,ph:ha̍h/zz:a̍h
|mn_note=ha̍p - literary (“to close; to join; to add up to; classifier for matching sets”); ha̍h/a̍h - vernacular (“to suit; to get along well”)
|mn-t=hah8/hab8/gah4
|mn-t_note=hab8 - alternative pronunciation for "to close"; gah4 - "to fit", etc.
|w=sh:8gheq,7keq
|w_note=hheq (T5) - regular pronunciation for "together", etc; keq (T4) - alternative pronunciation in certain words like 合算
|mc=1
|oc=2,2
|ma=
|cat=v,a,n,pn
}}
====Definitions====
{{head|zh|hanzi}}
# to [[close]]; to [[shut]]
#: {{zh-x|合上|close}}
# to [[join]]; to [[combine]]; to [[unite]]; to [[bring]] [[together]]
#: {{zh-x|組合|to assemble}}
#: {{zh-x|集合|to gather, to assemble}}
# to [[suit]]; to [[fit]]
#: {{zh-x|適合|to fit, to suit}}
#: {{zh-x|合適|suitable, fitting}}
#: {{zh-x|魚 和 紅酒 不合。|Fish and red wine don't '''go well together'''.}}
#: {{zh-x|那 個 店 裡 沒有 合 我 尺寸 的 帽子。|There are no hats in that store that '''fit''' me.}}
# to have [[sexual intercourse]]
# to [[fight]], to have a [[confrontation]] with
# to [[add up]] to; to be [[equivalent]] to; to [[amount]] to
#: {{zh-x|一 公頃 合 十五 市畝。|A hectare '''is equivalent to''' 15 mu.}}
# [[spouse]]
# [[whole]]; [[entire]]
# [[together]]
#: {{zh-x|這 張 卡片 是 我們 全家 合 送 的。祝 您 母親節 快樂{lè}。|This card comes from the '''whole''' family. Happy Mother's Day.}}
# {{†}} {{alt form|zh|盒|t=box}}
# {{lb|zh|game|battle}} [[round]]
# {{lb|zh|astronomy}} [[conjunction]]
# {{lb|zh|Hokkien}} to [[get along]] well
# {{lb|zh|Chinese phonetics}} {{zh-short|合口|closed-mouthed}}
# {{lb|zh|Xiamen}} {{zh-classifier|matching sets of instruments such as teaware}}
# {{†}} {{zh-classifier|number of fights}}
# {{†}} to [[compound]]; to [[make up]] {{gl|medicine, etc.}}
#* {{zh-x|又 合 狂藥,令 人 服 之,父子 兄弟 不 相 知識,唯 以 殺害 為{wéi} 事。|ref=Weishu}}
#* {{zh-x|誰 敢 合 毒藥 與 你?這廝 好 大膽 也。|ref=Dou E Yuan}}
#* {{zh-x|吾 家 ^@葛 ^@巾 娘子,手 合 鴆湯,其 速 飲!|ref=Liaozhai}}
# {{surname|zh}}
====Compounds====
{{col3|zh|三合|不合|九合|交合|付合|偶合|公合|六合|分合|切合|勘合|化合|匡合|匯合|印合|只合|合上|合下|合伙|合作|合併|合傳|合兒|合刃|合券|合刻|合劑|合力|合十|合口|合同|合唱|合圍|合夥|合奏|合婚|合子|合宅|合宜|合家|合局|合巹|合并|合度|合式|合弦|合影|合後|合從|合意|合成|合戰|合手|合折|合抱|合拍|合掌|合撲|合擊|合攏|合數|合族|合昏|合是|合時|合朔|合本|合板|合格|合機|合款|合歡|合殺|合氣|合沓|合法|合注|合流|合準|合溜|合演|合火|合照|合營|合獨|合球|合理|合璧|合生|合用|合當|合眼|合租|合窆|合符|合算|合約|合縱|合纖|合群|合聲|合股|合苦|合著|合葉|合葬|合計|合該|合謀|合變|合谷|合資|合身|合轍|合辦|合造|合遝|合適|合醵|合金|合鏡|合面|合音|合頭|合體|合髻|合龍|吻合|和合|咬合|四合|回合|場合|夜合|契合|好合|密合|寡合|對合|巧合|布合|愈合|成合|打合|投合|折合|拌合|捏合|捻合|接合|揉合|摻合|撮合|攙合|整合|暌合|暗合|會合|比合|沓合|混合|湊合|溶合|烏合|熱合|牉合|牽合|理合|瓦合|當合|癒合|百合|相合|砌合|符合|簇合|糅合|糾合|納合|索合|組合|結合|統合|綜合|綴合|縫合|縮合|總合|耦合|聚合|聯合|苟合|融合|蟻合|複合|訢合|說合|調合|跑合|迎合|迴合|連合|遇合|適合|配合|重合|野合|鈿合|關合|集合|離合|鬥合|鳩合|黏合|齧合|久合垸|三合院|三合垸|合心|拼合|三合會|志同道合|媾合|合時宜|固拉合瑪|合浦|西合休|阿合雅|合肥|nan:袂合<tr:bē ha̍h>|吾合沙魯}}
====Descendants====
{{CJKV||||hợp}}
=== නිරුක්තිය 2 ===
{{zh-forms}}
==== උච්චාරණය ====
{{zh-pron
|m=gě
<!--
|m-s=go2
|m-s_note=substitution character
-->
|c=gap3
|h=pfs=khap
|mn=ml,jj,tw:kap
|mn_note=literary reading
|mn-t=hah8
|mc=2
|oc=1,1
|cat=n
}}
====Definitions====
{{head|zh|hanzi}}
# unit of volume, equal to one tenth of a {{zh-l|升}}
## {{lb|zh|Han dynasty}} equal to 2 {{zh-l|龠}}
=== නිරුක්තිය 3 ===
{{zh-forms}}
==== උච්චාරණය ====
{{zh-pron
|m=hé
|c=ho4
|h=
|mn=xm:hōⁿ
|mn-t=ho5
|cat=n
}}
====Definitions====
{{head|zh|hanzi}}
# {{lb|zh|music}} {{ng|[[Kunqu]] [[gongche]] [[notation]] for the [[note]] [[low sol]] (5̣).}}
# {{lb|zh|music}} {{ng|[[Cantonese opera]] [[gongche]] [[notation]] for the [[note]] [[low sol]] (5̣).}}
====Compounds====
{{col3|zh|合尺}}
==== ව්යුත්පන්න යෙදුම් ====
{{col3|zh|𪛗|yue:佮<tr:ho4>}}
=== නිරුක්තිය 4 ===
{{zh-forms}}
==== උච්චාරණය ====
{{zh-pron
|mn=kap/kah
|mn_note=kap - literary; kah - vernacular
|mn-t=gah4
|cat=v,mn:conj,mn-t:conj,mn:prep,mn-t:prep
}}
====Definitions====
{{head|zh|hanzi}}
# {{lb|zh|Hokkien|Teochew}} {{alt form|zh|佮|tr=-|t=[[and]]; [[with]]}}
=== නිරුක්තිය 5 ===
{{zh-forms|alt=敆}}
==== උච්චාරණය ====
{{zh-pron
|mn=kap
|mn-t=gab4
|mn-t_note=gab4 - "to combine" (e.g. 合藥)
|cat=v
}}
====Definitions====
{{head|zh|hanzi}}
# {{lb|zh|Hokkien|Teochew}} to [[wrap]] a [[book]] {{gl|with paper, etc.}}
# {{lb|zh|Hokkien|TCM}} to have a [[traditional Chinese medicine]] [[prescription]] [[filled]] {{gl|of the patient}}
# {{lb|zh|Mainland|_|Hokkien}} to [[embrace]] in one's [[bosom]]
# {{lb|zh|Mainland|_|Hokkien|figurative}} to [[watch over]]; to [[look after]]; to [[take care]] of {{gl|a child, etc.}}
# {{lb|zh|Taiwan Hokkien}} to [[match]]; to [[go]] with
# {{lb|zh|Teochew}} to [[concoct]] {{gl|to make medicine, etc.}}
# {{lb|zh|Teochew}} to [[form]] a [[partnership]]
# {{lb|zh|Teochew}} to [[bind]] {{gl|into a book}}
=====Compounds=====
{{col3|zh|合澉仔|合簀仔|合冊|合藥|合帳|合縫|合線|合房|合萬|合萬櫃|合喙|合逝|合食|合錢|合磨|合跤合手|合合磨}}
=== නිරුක්තිය 6 ===
{{zh-see|閤|s|to [[close]], to [[shut]]; [[all of]], [[whole]] of}}
=== නිරුක්තිය 7 ===
{{zh-see|盒|ss}}
=== මූලාශ්ර ===
* {{R:yue:Hanzi}}
{{cat|cmn|Elementary Mandarin}}
== ජපන් ==
=== කන්ජි ===
{{ja-kanji|grade=2|rs=口03}}
# [[fit]]
# [[suit]]
# [[join]]
# one [[tenth]]
====Readings====
From {{der|ja|ltc|-}} {{ltc-l|合|id=1}}; compare {{cog|cmn|合|tr=hé, gě}}:
{{ja-readings
|goon=ごう<ごふ
|kanon=こう<かふ
|kanyoon=かっ, がっ, ごう<がふ
}}
From {{der|ja|ltc|-}} {{ltc-l|合|id=2}}; compare {{cog|cmn|合|tr=gě}}:
{{ja-readings
|goon=こう<こふ
|kanon=こう<かふ
}}
From native {{w|Japanese_language|Japanese}} roots:
{{ja-readings
|kun=あ-う<あ-ふ, あ-わす<あ-はす, あ-わせる<あ-はせる, あ-い<あ-ひ, あつ-まる, あつ-める, あわい-<あはひ-, あ-わさる<あ-はさる, あわ-せ<あは-せ
|nanori=あい, あう, かい, はる, よし
}}
====Compounds====
* {{ja-r|合%憎|あい%にく|gloss=[[unfortunately]]}}
* {{ja-r|連%合|れん%ごう|gloss=[[union]], [[alliance]], [[combination]]}}
* {{ja-r|総%合|そう%ごう|gloss=synthesis, consolidation}}
=== නිරුක්තිය 1 ===
{{ja-kanjitab|ごう|yomi=o}}
====Counter====
{{ja-pos|counter|hhira=がふ|ごう}}
# covered [[container]]s
# [[battle]]s
==== නාම පදය ====
{{ja-noun|ごう|hhira=がふ}}
# 0.18039 [[liter]]s, equaling ten [[shaku]] or a tenth of a [[shō]]
# a tenth of the distance from the base to the summit of a mountain
===== අමතර අවධානයට =====
* {{ja-r|升|ます}}
* {{ja-r|勺|しゃく}}
=== නිරුක්තිය 2 ===
{{ja-kanjitab|あ|o1=い|yomi=k}}
{{ja-see|あい}}
{{C|ja|Units of measure|sort=こう'}}
== කොරියානු ==
=== නිරුක්තිය ===
{{rfe|ko|Middle Korean readings, if any}}
=== උච්චාරණය ===
{{ko-hanja-pron|합}}
===Hanja===
{{ko-hanja-search}}
{{ko-hanja|합하다|합할|합}}
# {{hanja form of|합|to [[unite]]}}
== වියට්නාම ==
=== හන් අනුලක්ෂණය ===
{{vi-readings|rs=口03
|hanviet=hợp-tdcndg;tdcntd;tvctdhv;hvttd;tchvtd, hiệp-tvctdhv;hvttd;tchvtd, hạp-gdhn;tvctdhv, cáp-tvctdhv;hvttd
|phienthiet=
|nom=hợp-tdcndg;tdcntd;gdhn, họp-tdcndg;tdcntd;gdhn, hiệp-tdcndg;tdcntd, hạp-gdhn, cáp-tdcndg, cóp-gdhn, gộp-gdhn, sáp-tdcndg
}}
# {{vi-Han form of|hợp|to [[unite]], [[suitable]]}}
# {{vi-Han form of|hiệp|[[unite]]}}
# {{vi-Han form of|hạp|to [[unite]], [[suitable]] (rare spelling of {{l|vi|hợp}})}}
# {{vi-Nom form of|sáp|flexible material used to store cosmetics}}
=== මූලාශ්ර ===
<references/>
36yh2klf68x2s1cf1gtstw1c924ox0s
character
0
125479
193366
2024-11-21T07:52:14Z
Lee
19
'== ඉංග්රීසි == === නාම පදය === {{en-noun}} # {{l|si|අනුලක්ෂණය}}' යොදමින් නව පිටුවක් තනන ලදි
193366
wikitext
text/x-wiki
== ඉංග්රීසි ==
=== නාම පදය ===
{{en-noun}}
# {{l|si|අනුලක්ෂණය}}
jbujljebrwf9oe482rbqpa0jjdbkfdq
193367
193366
2024-11-21T07:52:36Z
Lee
19
193367
wikitext
text/x-wiki
== ඉංග්රීසි ==
=== නාම පදය ===
{{en-noun}}
# {{l|si|අනුලක්ෂණ}}
821jiwevk37f2jysx53vn3rt6xw2nzh
193368
193367
2024-11-21T07:56:53Z
Lee
19
නිර්මාණය
193368
wikitext
text/x-wiki
{{also|charácter}}
== ඉංග්රීසි ==
===Etymology===
From {{der|en|enm|caracter}}, from {{der|en|fro|caractere}}, from {{der|en|la|character}}, from {{der|en|grc|χαρακτήρ||type, nature, character}}, from {{m|grc|χαράσσω||I engrave}}. {{doublet|en|charakter}}.
===Pronunciation===
* {{IPA|en|/ˈkæɹɪktə/|a=RP}}
* {{a|en|GenAm}}
** {{IPA|en|/ˈkæɹ(ə)ktɚ/|a=nMmmm}}
** {{IPA|en|/ˈkɛɹ(ə)ktɚ/|a=Mmmm}}
* {{audio|en|en-us-character.ogg|a=US}}
* {{hyphenation|en|char|ac|ter, cha|rac|ter}}
=== නාම පදය ===
{{en-noun|~}}
# {{l|si|අනුලක්ෂණ}}
# {{rfdef|en}}
====Hyponyms====
{{col4|en|bell character|Chinese character|control character|delete character|dominant character|escape character|cartoon character|null character|player character|round character|staple character|stock character|Hebrew character|Han character|Hàn character|main character|main-character syndrome|non-printing character}}
====Derived terms====
{{col-auto|en|get into character|bicharacter|biocharacter|characterful|characterhood|characterism|characterist|characterlike|characternym|characterologist|characterology|characteropathy|characterwise|charactery|charactonym|charactron|cocharacter|demicharacter|intercharacter|megacharacter|metacharacter|microcharacter|multicharacter|noncharacter|pseudocharacter|replacement character|subcharacter|supercharacter
|character beat|character arc|CJK character|CJKV character
|[[characterise]] / [[characterize]]
|[[characterisation]] / [[characterization]]
|characteristic
|characterless|character piece|intelligent character recognition
|Dirichlet character
|character actor
|character assassination
|character class
|character encoding
|character recognition
|character set
|character study
|character theory
|Chinese character
|in character
|out of character
|ASA character
|base character
|big-character poster
|box-drawing character
|break character
|breakout character
|build character
|carriage control character
|character actress
|character amnesia
|character cell
|character density
|character disorder
|character generator
|character man
|character map
|character part
|character reference
|character shoe
|character trait
|character user interface
|character witness
|character-based
|character-building
|character-forming
|combining character
|ghost character
|Han character
|Hebrew character
|lead character
|main character syndrome
|non-player character
|non-printable character
|optical character recognition
|original character
|out-of-character
|private-use character
|setaceous Hebrew character
|special character
|supplementary character
|title character
}}
{{lookfrom|en|character}}
====Descendants====
* {{desc|gd|caractar|bor=1}}
===Verb===
{{en-verb}}
# {{rfdef|en}}
===See also===
* {{l|en|codepoint}}
* {{l|en|font}}
* {{l|en|glyph}}
* {{l|en|letter}}
* {{l|en|symbol}}
* {{l|en|rune}}
* {{l|en|pictogram}}
==Latin==
===Etymology===
From the {{der|la|grc|χαρακτήρ}}.
===Pronunciation===
* {{la-IPA|charactēr}}
===Noun===
{{la-noun|charactēr<3>|g=m}}
# [[branding iron]]
# [[brand]] (made by a branding iron)
# [[characteristic]], [[mark]], [[#English|character]], [[style]]
#: {{syn|la|ingenium|nātūra|habitus|mēns|indolēs}}
====Declension====
{{la-ndecl|charactēr<3>}}
====Descendants====
* {{desc|ast|caráuter}}
* {{desc|hu|karakter}}
* {{desc|gl|caritel}}; {{desc|gl|carácter|bor=1|nolb=1}}
* {{desctree|sga|carachtar|bor=1}}
* {{desc|it|carattere}}
* {{desctree|zlw-ocs|charakter|lbor=1}}
* {{desctree|fro|caractere}}
* {{desctree|zlw-osk|charakter|lbor=1}}
* {{desctree|pl|charakter|lbor=1}}
* {{desc|pt|caractere|carácter}}
* {{desc|ro|caracter}}
* {{desc|scn|caràttiri}}
* {{desc|es|carácter}}
===References===
* {{R:L&S}}
* {{R:Gaffiot}}
* {{R:NLW}}
==Portuguese==
===Noun===
{{pt-noun|m}}
# {{pt-pre-reform|caráter|br=43|pt=11}}
ee2o48nssery6p8nlg69ycoc464qitj
193403
193368
2024-11-21T08:33:43Z
Pinthura
2424
යොමු තොරතුරු පරිවර්තනය
193403
wikitext
text/x-wiki
{{also|charácter}}
== ඉංග්රීසි ==
=== නිරුක්තිය ===
From {{der|en|enm|caracter}}, from {{der|en|fro|caractere}}, from {{der|en|la|character}}, from {{der|en|grc|χαρακτήρ||type, nature, character}}, from {{m|grc|χαράσσω||I engrave}}. {{doublet|en|charakter}}.
=== උච්චාරණය ===
* {{IPA|en|/ˈkæɹɪktə/|a=RP}}
* {{a|en|GenAm}}
** {{IPA|en|/ˈkæɹ(ə)ktɚ/|a=nMmmm}}
** {{IPA|en|/ˈkɛɹ(ə)ktɚ/|a=Mmmm}}
* {{audio|en|en-us-character.ogg|a=US}}
* {{hyphenation|en|char|ac|ter, cha|rac|ter}}
=== නාම පදය ===
{{en-noun|~}}
# {{l|si|අනුලක්ෂණ}}
# {{rfdef|en}}
====Hyponyms====
{{col4|en|bell character|Chinese character|control character|delete character|dominant character|escape character|cartoon character|null character|player character|round character|staple character|stock character|Hebrew character|Han character|Hàn character|main character|main-character syndrome|non-printing character}}
==== ව්යුත්පන්න යෙදුම් ====
{{col-auto|en|get into character|bicharacter|biocharacter|characterful|characterhood|characterism|characterist|characterlike|characternym|characterologist|characterology|characteropathy|characterwise|charactery|charactonym|charactron|cocharacter|demicharacter|intercharacter|megacharacter|metacharacter|microcharacter|multicharacter|noncharacter|pseudocharacter|replacement character|subcharacter|supercharacter
|character beat|character arc|CJK character|CJKV character
|[[characterise]] / [[characterize]]
|[[characterisation]] / [[characterization]]
|characteristic
|characterless|character piece|intelligent character recognition
|Dirichlet character
|character actor
|character assassination
|character class
|character encoding
|character recognition
|character set
|character study
|character theory
|Chinese character
|in character
|out of character
|ASA character
|base character
|big-character poster
|box-drawing character
|break character
|breakout character
|build character
|carriage control character
|character actress
|character amnesia
|character cell
|character density
|character disorder
|character generator
|character man
|character map
|character part
|character reference
|character shoe
|character trait
|character user interface
|character witness
|character-based
|character-building
|character-forming
|combining character
|ghost character
|Han character
|Hebrew character
|lead character
|main character syndrome
|non-player character
|non-printable character
|optical character recognition
|original character
|out-of-character
|private-use character
|setaceous Hebrew character
|special character
|supplementary character
|title character
}}
{{lookfrom|en|character}}
====Descendants====
* {{desc|gd|caractar|bor=1}}
=== ක්රියා පදය ===
{{en-verb}}
# {{rfdef|en}}
=== අමතර අවධානයට ===
* {{l|en|codepoint}}
* {{l|en|font}}
* {{l|en|glyph}}
* {{l|en|letter}}
* {{l|en|symbol}}
* {{l|en|rune}}
* {{l|en|pictogram}}
== ලතින් ==
=== නිරුක්තිය ===
From the {{der|la|grc|χαρακτήρ}}.
=== උච්චාරණය ===
* {{la-IPA|charactēr}}
=== නාම පදය ===
{{la-noun|charactēr<3>|g=m}}
# [[branding iron]]
# [[brand]] (made by a branding iron)
# [[characteristic]], [[mark]], [[#English|character]], [[style]]
#: {{syn|la|ingenium|nātūra|habitus|mēns|indolēs}}
==== වරනැඟීම ====
{{la-ndecl|charactēr<3>}}
====Descendants====
* {{desc|ast|caráuter}}
* {{desc|hu|karakter}}
* {{desc|gl|caritel}}; {{desc|gl|carácter|bor=1|nolb=1}}
* {{desctree|sga|carachtar|bor=1}}
* {{desc|it|carattere}}
* {{desctree|zlw-ocs|charakter|lbor=1}}
* {{desctree|fro|caractere}}
* {{desctree|zlw-osk|charakter|lbor=1}}
* {{desctree|pl|charakter|lbor=1}}
* {{desc|pt|caractere|carácter}}
* {{desc|ro|caracter}}
* {{desc|scn|caràttiri}}
* {{desc|es|carácter}}
=== මූලාශ්ර ===
* {{R:L&S}}
* {{R:Gaffiot}}
* {{R:NLW}}
== පෘතුගීසි ==
=== නාම පදය ===
{{pt-noun|m}}
# {{pt-pre-reform|caráter|br=43|pt=11}}
joofvnk5oo1qiof8hx4ddo7qba78he1
අනුලක්ෂණය
0
125480
193369
2024-11-21T07:57:38Z
Lee
19
'== සිංහල == === නිරුක්තිය === {{rfe|si}} === නාම පදය === {{si-noun}} # {{singular of|si|අනුලක්ෂණ}} <!-- ==== පරිවර්තන ==== {{trans-top|පරිවර්තන}} * ඉංග්රීසි: {{t|en|<<ඉංග්රීසි වචනය>>}} {{trans-bottom}} === අමතර අවධානයට === * {{l|si|<<ආශ්රිත ප...' යොදමින් නව පිටුවක් තනන ලදි
193369
wikitext
text/x-wiki
== සිංහල ==
=== නිරුක්තිය ===
{{rfe|si}}
=== නාම පදය ===
{{si-noun}}
# {{singular of|si|අනුලක්ෂණ}}
<!--
==== පරිවර්තන ====
{{trans-top|පරිවර්තන}}
* ඉංග්රීසි: {{t|en|<<ඉංග්රීසි වචනය>>}}
{{trans-bottom}}
=== අමතර අවධානයට ===
* {{l|si|<<ආශ්රිත පවතින වෙනත් වචන>>}}
-->
7rv6osozovsa3c29yuczml3n29aok1v
අනුලක්ෂණ
0
125481
193370
2024-11-21T07:58:12Z
Lee
19
'== සිංහල == === නිරුක්තිය === {{rfe|si}} === නාම පදය === {{si-noun}} # {{rfdef|si}} ==== පරිවර්තන ==== {{trans-top|පරිවර්තන}} * ඉංග්රීසි: {{t|en|character}} {{trans-bottom}} <!-- === අමතර අවධානයට === * {{l|si|<<ආශ්රිත පවතින වෙනත් වචන>>}} -->' යොදමින් නව පිටුවක් තනන ලදි
193370
wikitext
text/x-wiki
== සිංහල ==
=== නිරුක්තිය ===
{{rfe|si}}
=== නාම පදය ===
{{si-noun}}
# {{rfdef|si}}
==== පරිවර්තන ====
{{trans-top|පරිවර්තන}}
* ඉංග්රීසි: {{t|en|character}}
{{trans-bottom}}
<!--
=== අමතර අවධානයට ===
* {{l|si|<<ආශ්රිත පවතින වෙනත් වචන>>}}
-->
rscdt7nk70embuhtbz8bgd1b92x66yt
193371
193370
2024-11-21T07:59:06Z
Lee
19
193371
wikitext
text/x-wiki
== සිංහල ==
=== නිරුක්තිය ===
{{rfe|si}}
=== නාම පදය ===
{{si-noun}}
# {{rfdef|si}}
# සිංහල වික්ෂණරියේ භාවිතය, {{l|en|character}} සඳහා වන සිංහල පදය.
==== පරිවර්තන ====
{{trans-top|පරිවර්තන}}
* ඉංග්රීසි: {{t|en|character}}
{{trans-bottom}}
<!--
=== අමතර අවධානයට ===
* {{l|si|<<ආශ්රිත පවතින වෙනත් වචන>>}}
-->
20ikp86ai8kp1evu8i9id1rwsx9069v
193373
193371
2024-11-21T08:01:15Z
Lee
19
193373
wikitext
text/x-wiki
== සිංහල ==
=== නිරුක්තිය ===
{{rfe|si}}
=== නාම පදය ===
{{si-noun}}
# {{rfdef|si}}
# {{සිංහල වික්ෂණරියේ භාවිතය|character}}
==== පරිවර්තන ====
{{trans-top|පරිවර්තන}}
* ඉංග්රීසි: {{t|en|character}}
{{trans-bottom}}
<!--
=== අමතර අවධානයට ===
* {{l|si|<<ආශ්රිත පවතින වෙනත් වචන>>}}
-->
cno12na6q0k4mg5073iuvzs5fajudu1
සැකිල්ල:සිංහල වික්ෂණරියේ භාවිතය
10
125482
193372
2024-11-21T08:00:54Z
Lee
19
නිර්මාණය
193372
wikitext
text/x-wiki
# සිංහල වික්ෂණරියේ භාවිතය, {{l|en|{{{1|}}}}} සඳහා වන සිංහල පදය.<noinclude>
{{උපදෙස්}}
</noinclude>
rrt9iyxcazk2pxhpuz20gcsbrhk24m5
193377
193372
2024-11-21T08:04:52Z
Lee
19
193377
wikitext
text/x-wiki
[[ප්රවර්ගය:ව්යාපෘතියේ සම්මත ලෙස භාවිතා වන පරිවර්තන]]
# සිංහල වික්ෂණරියේ භාවිතය, {{l|en|{{{1|}}}}} සඳහා වන සිංහල පදය.<noinclude>
{{උපදෙස්}}
</noinclude>
533xo3fuc1o4fh5ylqmehzxxoe7wtsi
193378
193377
2024-11-21T08:06:39Z
Lee
19
193378
wikitext
text/x-wiki
{{#ifeq:{{NAMESPACE}}|සැකිල්ල||
[[ප්රවර්ගය:ව්යාපෘතියේ සම්මත ලෙස භාවිතා වන පරිවර්තන]]
}}
# සිංහල වික්ෂණරියේ භාවිතය, {{l|en|{{{1|}}}}} සඳහා වන සිංහල පදය.<noinclude>
{{උපදෙස්}}
</noinclude>
tlkihfc8u41jdsp6ujqwbddu35xhn9y
සැකිල්ල:සිංහල වික්ෂණරියේ භාවිතය/documentation
10
125483
193374
2024-11-21T08:02:51Z
Lee
19
'{{උපදෙස් උප පිටුව}} සිංහල වික්ෂණරි ව්යාපෘතියේ සම්මත ලෙස භාවිතා වන පරිවර්තන සඳහා භාවිතා වන සැකිල්ල. මතු දැක්වෙන ආකාරයේ ප්රතිදානයක් ලබා දෙයි: # සිංහල වික්ෂණරියේ...' යොදමින් නව පිටුවක් තනන ලදි
193374
wikitext
text/x-wiki
{{උපදෙස් උප පිටුව}}
සිංහල වික්ෂණරි ව්යාපෘතියේ සම්මත ලෙස භාවිතා වන පරිවර්තන සඳහා භාවිතා වන සැකිල්ල.
මතු දැක්වෙන ආකාරයේ ප්රතිදානයක් ලබා දෙයි:
# සිංහල වික්ෂණරියේ භාවිතය, {{l|en|character}} සඳහා වන සිංහල පදය.
0kl7e3zpbx8z7ypd3xw9m00lq0qpo97
193375
193374
2024-11-21T08:03:40Z
Lee
19
193375
wikitext
text/x-wiki
{{උපදෙස් උප පිටුව}}
සිංහල වික්ෂණරි ව්යාපෘතියේ සම්මත ලෙස භාවිතා වන පරිවර්තන සඳහා භාවිතා වන සැකිල්ල.
මතු දැක්වෙන ආකාරයේ ප්රතිදානයක් ලබා දෙයි:
# සිංහල වික්ෂණරියේ භාවිතය, {{l|en|character}} සඳහා වන සිංහල පදය.
<noinclude>
[[ප්රවර්ගය:විශේෂ සැකිලි]]
</noinclude>
tgvo6kibatajs8sfq1f3im8xgohazo6
193376
193375
2024-11-21T08:03:59Z
Lee
19
193376
wikitext
text/x-wiki
{{උපදෙස් උප පිටුව}}
සිංහල වික්ෂණරි ව්යාපෘතියේ සම්මත ලෙස භාවිතා වන පරිවර්තන සඳහා භාවිතා වන සැකිල්ල.
මතු දැක්වෙන ආකාරයේ ප්රතිදානයක් ලබා දෙයි:
# සිංහල වික්ෂණරියේ භාවිතය, {{l|en|character}} සඳහා වන සිංහල පදය.
<includeonly>
[[ප්රවර්ගය:විශේෂ සැකිලි]]
</includeonly>
4lp4eoprpo417v75ynibi7cklr4pzvu
193383
193376
2024-11-21T08:11:41Z
Lee
19
193383
wikitext
text/x-wiki
{{උපදෙස් උප පිටුව}}
{{shortcut|සැකිල්ල:සම්මත}}
සිංහල වික්ෂණරි ව්යාපෘතියේ සම්මත ලෙස භාවිතා වන පරිවර්තන සඳහා භාවිතා වන සැකිල්ල.
මතු දැක්වෙන ආකාරයේ ප්රතිදානයක් ලබා දෙයි:
# සිංහල වික්ෂණරියේ භාවිතය, {{l|en|character}} සඳහා වන සිංහල පදය.
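උදාහරණයක් ලෙස, මෙම සැකිල්ල පහත ආකාරයට යෙදිය හැකි යැයි උපකල්පනය කෙරේ (පළමු පරාමිතිය ලෙස ඉංග්රීසි පදය):
<code><nowiki>{{සිංහල වික්ෂණරියේ භාවිතය|character}}</nowiki></code>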
<includeonly>
[[ප්රවර්ගය:විශේෂ සැකිලි]]
</includeonly>
b2djv1czqqm5zevj2vmg4rk9eqolckg
ප්රවර්ගය:ව්යාපෘතියේ සම්මත ලෙස භාවිතා වන පරිවර්තන
14
125484
193379
2024-11-21T08:07:32Z
Lee
19
නිර්මාණය
193379
wikitext
text/x-wiki
[[ප්රවර්ගය:විශේෂ ප්රවර්ග]]
bm9wizwr8g59cud2el9gfgsu7e46wvv
193380
193379
2024-11-21T08:07:50Z
Lee
19
193380
wikitext
text/x-wiki
{{නිරීක්ෂණ ප්රවර්ගය}}
[[ප්රවර්ගය:විශේෂ ප්රවර්ග]]
1mydvowy978w3dib104v3o413jurzkj
193381
193380
2024-11-21T08:09:10Z
Lee
19
193381
wikitext
text/x-wiki
{{නිරීක්ෂණ ප්රවර්ගය
|text=මෙම ව්යාපෘතියේ සම්මත ලෙස භාවිතා වන පරිවර්තන මෙහි ලැයිස්තු ගත වෙයි.
}}
[[ප්රවර්ගය:විශේෂ ප්රවර්ග]]
gmigzail35j8spi917r922a4fhpvwmh
ප්රවර්ගය:විශේෂ ප්රවර්ග
14
125485
193382
2024-11-21T08:09:41Z
Lee
19
නිර්මාණය
193382
wikitext
text/x-wiki
[[ප්රවර්ගය:ප්රවර්ග]]
5rvqip8mtp1l1horii95vcz0rnypath
සැකිල්ල:සම්මත
10
125486
193384
2024-11-21T08:11:56Z
Lee
19
[[සැකිල්ල:සිංහල වික්ෂණරියේ භාවිතය]] වෙතට යළි-යොමුකරමින්
193384
wikitext
text/x-wiki
#REDIRECT [[සැකිල්ල:සිංහල වික්ෂණරියේ භාවිතය]]
kf6lk9jr2ai1jyyyuovxpltw1wwlvwe
characters
0
125487
193385
2024-11-21T08:12:20Z
Lee
19
නිර්මාණය
193385
wikitext
text/x-wiki
==English==
===Pronunciation===
* {{IPA|en|/ˈkɛɹəktɚz/|/ˈkʰæɹəktɚz/|a=GenAm}}
* {{IPA|en|/ˈkæɹəktəz/|a=RP}}
* {{audio|en|en-us-characters.ogg|a=US}}
* {{hyphenation|en|char|ac|ters}}
===Noun===
{{head|en|noun form}}
# {{plural of|en|character}}
48v007xytbr59rbig0w74sup80ic7hp
193402
193385
2024-11-21T08:33:23Z
Pinthura
2424
යොමු තොරතුරු පරිවර්තනය
193402
wikitext
text/x-wiki
== ඉංග්රීසි ==
=== උච්චාරණය ===
* {{IPA|en|/ˈkɛɹəktɚz/|/ˈkʰæɹəktɚz/|a=GenAm}}
* {{IPA|en|/ˈkæɹəktəz/|a=RP}}
* {{audio|en|en-us-characters.ogg|a=US}}
* {{hyphenation|en|char|ac|ters}}
=== නාම පදය ===
{{head|en|noun form}}
# {{plural of|en|character}}
it7nokejvv1zymsnixtx5t0q4dbtqzc
characteres
0
125488
193386
2024-11-21T08:12:38Z
Lee
19
නිර්මාණය
193386
wikitext
text/x-wiki
{{also|charácteres}}
==Latin==
===Noun===
{{head|la|noun form|head=charactērēs}}
# {{inflection of|la|character||nom//acc//voc|p}}
hp6cz1eo5ahgu4gsazdo27l98jsfbzz
193401
193386
2024-11-21T08:33:03Z
Pinthura
2424
යොමු තොරතුරු පරිවර්තනය
193401
wikitext
text/x-wiki
{{also|charácteres}}
== ලතින් ==
=== නාම පදය ===
{{head|la|noun form|head=charactērēs}}
# {{inflection of|la|character||nom//acc//voc|p}}
hfp6ud88x50vz2zlg53yeb8fb0dbvv8
characteris
0
125489
193387
2024-11-21T08:12:54Z
Lee
19
නිර්මාණය
193387
wikitext
text/x-wiki
==Latin==
===Noun===
{{head|la|noun form|head=charactēris}}
# {{inflection of|la|character||gen|s}}
dyepe6xyrwhf97upxmzlydpx5hrlcq0
193400
193387
2024-11-21T08:32:43Z
Pinthura
2424
යොමු තොරතුරු පරිවර්තනය
193400
wikitext
text/x-wiki
== ලතින් ==
=== නාම පදය ===
{{head|la|noun form|head=charactēris}}
# {{inflection of|la|character||gen|s}}
drrjh6al4m41inmu6p4r92w0f5o1b9m
ප්රවර්ගය:ලතින් ප්රවේශ, දෝෂ සහගත භාෂා ශීර්ෂක සහිත
14
125490
193404
2024-02-04T13:07:27Z
en>Theknightwho
0
Created page with "{{auto cat}}"
193404
wikitext
text/x-wiki
{{auto cat}}
eomzlm5v4j7ond1phrju7cnue91g5qx
193405
193404
2024-11-21T08:41:03Z
Lee
19
[[:en:Category:Latin_entries_with_incorrect_language_header]] වෙතින් එක් සංශෝධනයක්
193404
wikitext
text/x-wiki
{{auto cat}}
eomzlm5v4j7ond1phrju7cnue91g5qx
193406
193405
2024-11-21T08:42:04Z
Lee
19
Lee විසින් [[ප්රවර්ගය:Latin entries with incorrect language header]] සිට [[ප්රවර්ගය:ලතින් ප්රවේශ, දෝෂ සහගත භාෂා ශීර්ෂක සහිත]] වෙත පිටුව ගෙන යන ලදී
193404
wikitext
text/x-wiki
{{auto cat}}
eomzlm5v4j7ond1phrju7cnue91g5qx
ප්රවර්ගය:Latin entries with incorrect language header
14
125491
193407
2024-11-21T08:42:04Z
Lee
19
Lee විසින් [[ප්රවර්ගය:Latin entries with incorrect language header]] සිට [[ප්රවර්ගය:ලතින් ප්රවේශ, දෝෂ සහගත භාෂා ශීර්ෂක සහිත]] වෙත පිටුව ගෙන යන ලදී
193407
wikitext
text/x-wiki
#යළියොමුව [[:ප්රවර්ගය:ලතින් ප්රවේශ, දෝෂ සහගත භාෂා ශීර්ෂක සහිත]]
movz94qibhsry2hs8z8pg8wivv4tn9m
ප්රවර්ගය:ලතින් නාම පද ස්වරූප
14
125492
193408
2024-11-21T08:45:31Z
Pinthura
2424
සේවා: [[:en:Category:Latin noun forms]] තුළ තිබූ පෙළ මෙහි ඇතුළු කිරීම.
193408
wikitext
text/x-wiki
{{auto cat}}
eomzlm5v4j7ond1phrju7cnue91g5qx
193409
193408
2024-11-21T08:45:41Z
Pinthura
2424
Pinthura විසින් [[ප්රවර්ගය:Latin noun forms]] සිට [[ප්රවර්ගය:ලතින් නාම පද ස්වරූප]] වෙත පිටුව ගෙන යන ලදී: සේවා: නව ප්රවර්ග නාමය වෙත ගෙනයාම.
193408
wikitext
text/x-wiki
{{auto cat}}
eomzlm5v4j7ond1phrju7cnue91g5qx
193411
193409
2024-11-21T08:45:51Z
Pinthura
2424
සේවා: ඉංග්රීසි ව්යාපෘතිය වෙත සබැඳියක් එක් කිරීම.
193411
wikitext
text/x-wiki
{{auto cat}}
[[en:Category:Latin noun forms]]
jxr46bzj4xgu14f165ys6q4ex43acld
ප්රවර්ගය:Latin noun forms
14
125493
193410
2024-11-21T08:45:41Z
Pinthura
2424
Pinthura විසින් [[ප්රවර්ගය:Latin noun forms]] සිට [[ප්රවර්ගය:ලතින් නාම පද ස්වරූප]] වෙත පිටුව ගෙන යන ලදී: සේවා: නව ප්රවර්ග නාමය වෙත ගෙනයාම.
193410
wikitext
text/x-wiki
#යළියොමුව [[:ප්රවර්ගය:ලතින් නාම පද ස්වරූප]]
jm36nv03eb471n76g70vcxr49yht772
193412
193410
2024-11-21T08:46:01Z
Pinthura
2424
සේවා: යළියොමුව, මෘදු ප්රවර්ග යළියොමුවක් බවට හැරවීම.
193412
wikitext
text/x-wiki
{{category redirect|ලතින් නාම පද ස්වරූප}}
i8fw728g51bp10704segg3ap4cjwqfz
ප්රවර්ගය:Latin නාම පද ස්වරූප
14
125494
193413
2024-11-21T08:46:11Z
Pinthura
2424
සේවා: මෘදු ප්රවර්ග යළියොමුවක් නිර්මාණය.
193413
wikitext
text/x-wiki
{{category redirect|ලතින් නාම පද ස්වරූප}}
i8fw728g51bp10704segg3ap4cjwqfz
සැකිල්ල:R:JLect/documentation
10
125495
193416
2024-06-20T19:28:12Z
en>Chuterix
0
Created page with "{{documentation subpage}} This template cites JLect, a dictionary containing various Japonic words used in Japanese and Ryukyuan dialects. ===Usage=== * {{para|1}} *: The entry name in kana and kanji. * {{para|2}} *: The ID in the JLect entry, following {{code|html|entry/}}. * {{para|3}} *: The romanized form, which is in the URL after the ID. * {{para|accessdate}} *: The access date. ===Example=== For "https://www.jlect.com/entry/670/kutuba/", the following parameters..."
193416
wikitext
text/x-wiki
{{documentation subpage}}
This template cites JLect, a dictionary containing various Japonic words used in Japanese and Ryukyuan dialects.
===Usage===
* {{para|1}}
*: The entry name in kana and kanji.
* {{para|2}}
*: The ID in the JLect entry, following {{code|html|entry/}}.
* {{para|3}}
*: The romanized form, which is in the URL after the ID.
* {{para|accessdate}}
*: The access date.
===Example===
For "https://www.jlect.com/entry/670/kutuba/", the following parameters can be used:
{{temp|R:JLect|くとぅば【言葉・辞・詞・辭】|670|kutuba}} → {{R:JLect|くとぅば【言葉・辞・詞・辭】|670|kutuba}}
<includeonly>
{{reference template cat|ja|kzg|xug|mvi|ryn|okn|ryu|ams|tkn|rys|yoi|yox}}
</includeonly>
3bdq8o9h0s7eqntqm72xjw62q5w7kv7
193417
193416
2024-11-21T09:41:35Z
Lee
19
[[:en:Template:R:JLect/documentation]] වෙතින් එක් සංශෝධනයක්
193416
wikitext
text/x-wiki
{{documentation subpage}}
This template cites JLect, a dictionary containing various Japonic words used in Japanese and Ryukyuan dialects.
===Usage===
* {{para|1}}
*: The entry name in kana and kanji.
* {{para|2}}
*: The ID in the JLect entry, following {{code|html|entry/}}.
* {{para|3}}
*: The romanized form, which is in the URL after the ID.
* {{para|accessdate}}
*: The access date.
===Example===
For "https://www.jlect.com/entry/670/kutuba/", the following parameters can be used:
{{temp|R:JLect|くとぅば【言葉・辞・詞・辭】|670|kutuba}} → {{R:JLect|くとぅば【言葉・辞・詞・辭】|670|kutuba}}
<includeonly>
{{reference template cat|ja|kzg|xug|mvi|ryn|okn|ryu|ams|tkn|rys|yoi|yox}}
</includeonly>
3bdq8o9h0s7eqntqm72xjw62q5w7kv7
ප්රවර්ගය:ඉක්මන් මකා දැමීම සඳහා යෝජිතයෝ
14
125496
193421
2024-11-21T09:45:16Z
Lee
19
Lee විසින් [[ප්රවර්ගය:ඉක්මන් මකා දැමීම සඳහා යෝජිතයෝ]] සිට [[ප්රවර්ගය:අභියෝගයට ලක් නොවන මකා දැමීම සඳහා යෝජිතයෝ]] වෙත පිටුව ගෙන යන ලදී
193421
wikitext
text/x-wiki
#යළියොමුව [[:ප්රවර්ගය:අභියෝගයට ලක් නොවන මකා දැමීම සඳහා යෝජිතයෝ]]
sczjuzad7n77djoav3swf98uhk4ydrs
Module:pt-verb
828
125497
193425
2024-08-26T01:55:54Z
en>Ioaxxere
0
remove stray newlines
193425
Scribunto
text/plain
local export = {}
--[=[
Authorship: Ben Wing <benwing2>
]=]
--[=[
TERMINOLOGY:
-- "slot" = A particular combination of tense/mood/person/number/etc.
Example slot names for verbs are "pres_1s" (present indicative first-person singular), "pres_sub_2s" (present
subjunctive second-person singular) "impf_sub_3p" (imperfect subjunctive third-person plural).
Each slot is filled with zero or more forms.
-- "form" = The conjugated Portuguese form representing the value of a given slot.
-- "lemma" = The dictionary form of a given Portuguese term. For Portuguese, always the infinitive.
]=]
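-- Illustrative example (not from the original module): for the lemma [[falar]], the slot "pres_1s"
-- holds the form "falo", "impf_sub_3p" holds "falassem", and "pp_ms" holds "falado".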
--[=[
FIXME:
--"i-e" alternation doesn't work properly when the stem comes with a hiatus in it.
--]=]
local force_cat = false -- set to true for debugging
local check_for_red_links = false -- set to false for debugging
local lang = require("Module:languages").getByCode("pt")
local m_str_utils = require("Module:string utilities")
local m_links = require("Module:links")
local m_table = require("Module:table")
local iut = require("Module:inflection utilities")
local com = require("Module:pt-common")
local format = m_str_utils.format
local remove_final_accent = com.remove_final_accent
local rfind = m_str_utils.find
local rmatch = m_str_utils.match
local rsplit = m_str_utils.split
local rsub = com.rsub
local u = m_str_utils.char
local function link_term(term)
return m_links.full_link({ lang = lang, term = term }, "term")
end
local V = com.V -- vowel regex class
local AV = com.AV -- accented vowel regex class
local C = com.C -- consonant regex class
local AC = u(0x0301) -- acute = ́
local TEMPC1 = u(0xFFF1) -- temporary character used for consonant substitutions
local TEMP_MESOCLITIC_INSERTION_POINT = u(0xFFF2) -- temporary character used to mark the mesoclitic insertion point
local VAR_BR = u(0xFFF3) -- variant code for Brazil
local VAR_PT = u(0xFFF4) -- variant code for Portugal
local VAR_SUPERSEDED = u(0xFFF5) -- variant code for superseded forms
local VAR_NORMAL = u(0xFFF6) -- variant code for non-superseded forms
local all_var_codes = VAR_BR .. VAR_PT .. VAR_SUPERSEDED .. VAR_NORMAL
local var_codes_no_superseded = VAR_BR .. VAR_PT .. VAR_NORMAL
local var_code_c = "[" .. all_var_codes .. "]"
local var_code_no_superseded_c = "[" .. var_codes_no_superseded .. "]"
local not_var_code_c = "[^" .. all_var_codes .. "]"
-- Export variant codes for use in [[Module:pt-inflections]].
export.VAR_BR = VAR_BR
export.VAR_PT = VAR_PT
export.VAR_SUPERSEDED = VAR_SUPERSEDED
export.VAR_NORMAL = VAR_NORMAL
local short_pp_footnote = "[usually used with auxiliary verbs " .. link_term("ser") .. " and " .. link_term("estar") .. "]"
local long_pp_footnote = "[usually used with auxiliary verbs " .. link_term("haver") .. " and " .. link_term("ter") .. "]"
--[=[
Vowel alternations:
<i-e>: 'i' in pres1s and the whole present subjunctive; 'e' elsewhere when stressed. Generally 'e' otherwise when
unstressed. E.g. [[sentir]], [[conseguir]] (the latter additionally with 'gu-g' alternation).
<u-o>: 'u' in pres1s and the whole present subjunctive; 'o' elsewhere when stressed. Either 'o' or 'u' otherwise when
unstressed. E.g. [[dormir]], [[subir]].
<i>: 'i' whenever stressed (in the present singular and third plural) and throughout the whole present subjunctive.
Otherwise 'e'. E.g. [[progredir]], also [[premir]] per Priberam.
<u>: 'u' whenever stressed (in the present singular and third plural) and throughout the whole present subjunctive.
Otherwise 'o'. E.g. [[polir]], [[extorquir]] (the latter also <u-o>).
<í>: The last 'i' of the stem (excluding stem-final 'i') becomes 'í' when stressed. E.g.:
* [[proibir]] ('proíbo, proíbe(s), proíbem, proíba(s), proíbam')
* [[faiscar]] ('faísco, faísca(s), faíscam, faísque(s), faísquem' also with 'c-qu' alternation)
* [[homogeneizar]] ('homogeneízo', etc.)
* [[mobiliar]] ('mobílio', etc.; note here the final -i is ignored when determining which vowel to stress)
* [[tuitar]] ('tuíto', etc.)
<ú>: The last 'u' of the stem (excluding stem-final 'u') becomes 'ú' when stressed. E.g.:
* [[reunir]] ('reúno, reúne(s), reúnem, reúna(s), reúnam')
* [[esmiuçar]] ('esmiúço, esmiúça(s), esmiúça, esmiúce(s), esmiúcem' also with 'ç-c' alternation)
* [[reusar]] ('reúso, reúsa(s), reúsa, reúse(s), reúsem')
* [[saudar]] ('saúdo, saúda(s), saúda, saúde(s), saúdem')
]=]
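-- Illustrative examples (assumptions, for orientation only): with the <i-e> alternation, [[sentir]]
-- gives pres_1s "sinto", pres_3s "sente" and pres_sub_3s "sinta"; with <u-o>, [[dormir]] gives
-- pres_1s "durmo", pres_3s "dorme" and pres_sub_3s "durma".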
local vowel_alternants = m_table.listToSet({"i-e", "i", "í", "u-o", "u", "ú", "ei", "+"})
local vowel_alternant_to_desc = {
["i-e"] = "''i-e'' alternation in present singular",
["i"] = "''e'' becomes ''i'' when stressed",
["í"] = "''i'' becomes ''í'' when stressed",
["u-o"] = "''u-o'' alternation in present singular",
["u"] = "''o'' becomes ''u'' when stressed",
["ú"] = "''u'' becomes ''ú'' when stressed",
["ei"] = "''i'' becomes ''ei'' when stressed",
}
local vowel_alternant_to_cat = {
["i-e"] = "i-e alternation in present singular",
["i"] = "e becoming i when stressed",
["í"] = "i becoming í when stressed",
["u-o"] = "u-o alternation in present singular",
["u"] = "o becoming u when stressed",
["ú"] = "u becoming ú when stressed",
["ei"] = "i becoming ei when stressed",
}
local all_persons_numbers = {
["1s"] = "1|s",
["2s"] = "2|s",
["3s"] = "3|s",
["1p"] = "1|p",
["2p"] = "2|p",
["3p"] = "3|p",
}
local person_number_list = {"1s", "2s", "3s", "1p", "2p", "3p"}
local imp_person_number_list = {"2s", "3s", "1p", "2p", "3p"}
local neg_imp_person_number_list = {"2s", "3s", "1p", "2p", "3p"}
person_number_to_reflexive_pronoun = {
["1s"] = "me",
["2s"] = "te",
["3s"] = "se",
["1p"] = "nos",
["2p"] = "vos",
["3p"] = "se",
}
local indicator_flags = m_table.listToSet {
"no_pres_stressed", "no_pres1_and_sub",
"only3s", "only3sp", "only3p",
"pp_inv", "irreg", "no_built_in", "e_ei_cat",
}
-- Remove any variant codes e.g. VAR_BR, VAR_PT, VAR_SUPERSEDED. Needs to be called from [[Module:pt-headword]] on the
-- output of do_generate_forms(). `keep_superseded` leaves VAR_SUPERSEDED; used in the `canonicalize` function of
-- show_forms() because we then process and remove it in `generate_forms`. FIXME: Use metadata for this once it's
-- supported in [[Module:inflection utilities]].
function export.remove_variant_codes(form, keep_superseded)
return rsub(form, keep_superseded and var_code_no_superseded_c or var_code_c, "")
end
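-- Illustrative example (assumption): remove_variant_codes(VAR_BR .. "aceito") returns "aceito",
-- while remove_variant_codes(VAR_SUPERSEDED .. "dêem", true) keeps the VAR_SUPERSEDED marker
-- (only the Brazil/Portugal/normal codes are stripped when `keep_superseded` is set).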
-- Initialize all the slots for which we generate forms.
local function add_slots(alternant_multiword_spec)
-- "Basic" slots: All slots that go into the regular table (not the reflexive form-of table).
alternant_multiword_spec.verb_slots_basic = {
{"infinitive", "inf"},
{"infinitive_linked", "inf"},
{"gerund", "ger"},
{"short_pp_ms", "short|m|s|past|part"},
{"short_pp_fs", "short|f|s|past|part"},
{"short_pp_mp", "short|m|p|past|part"},
{"short_pp_fp", "short|f|p|past|part"},
{"pp_ms", "m|s|past|part"},
{"pp_fs", "f|s|past|part"},
{"pp_mp", "m|p|past|part"},
{"pp_fp", "f|p|past|part"},
}
-- Special slots used to handle non-reflexive parts of reflexive verbs in {{pt-verb form of}}.
-- For example, for a reflexive-only verb like [[esbaldar-se]], we want to be able to use {{pt-verb form of}} on
-- [[esbalde]] (which should mention that it is a part of 'me esbalde', first-person singular present subjunctive,
-- and 'se esbalde', third-person singular present subjunctive) or on [[esbaldamos]] (which should mention that it
-- is a part of 'esbaldamo-nos', first-person plural present indicative or preterite). Similarly, we want to use
-- {{pt-verb form of}} on [[esbaldando]] (which should mention that it is a part of 'se ... esbaldando', syntactic
-- variant of [[esbaldando-se]], which is the gerund of [[esbaldar-se]]). To do this, we need to be able to map
-- non-reflexive parts like [[esbalde]], [[esbaldamos]], [[esbaldando]], etc. to their reflexive equivalent(s), to
-- the tag(s) of the equivalent(s), and, in the case of forms like [[esbaldando]], [[esbaldar]] and imperatives, to
-- the separated syntactic variant of the verb+clitic combination. We do this by creating slots for the
-- non-reflexive part equivalent of each basic reflexive slot, and for the separated syntactic-variant equivalent
-- of each basic reflexive slot that is formed of verb+clitic. We use slots in this way to deal with multiword
-- lemmas. Note that we run into difficulties mapping between reflexive verbs, non-reflexive part equivalents, and
-- separated syntactic variants if a slot contains more than one form. To handle this, if there are the same number
-- of forms in two slots we're trying to match up, we assume the forms match one-to-one; otherwise we don't match up
-- the two slots (which means {{pt-verb form of}} won't work in this case, but such a case is extremely rare and not
-- worth worrying about). Alternatives that handle this "properly" are significantly more complicated and require
-- non-trivial modifications to [[Module:inflection utilities]].
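-- Illustrative example (assumption): when called from {{pt-verb form of}} for the reflexive-only
-- verb [[esbaldar-se]], the basic slot "pres_sub_1s" gains a companion slot
-- "pres_sub_1s_non_reflexive", which is what lets [[esbalde]] be mapped back to "me esbalde".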
local need_special_verb_form_of_slots = alternant_multiword_spec.source_template == "pt-verb form of" and
alternant_multiword_spec.refl
if need_special_verb_form_of_slots then
alternant_multiword_spec.verb_slots_reflexive_verb_form_of = {
{"infinitive_non_reflexive", "-"},
{"infinitive_variant", "-"},
{"gerund_non_reflexive", "-"},
{"gerund_variant", "-"},
}
else
alternant_multiword_spec.verb_slots_reflexive_verb_form_of = {}
end
-- Add entries for a slot with person/number variants.
-- `verb_slots` is the table to add to.
-- `slot_prefix` is the prefix of the slot, typically specifying the tense/aspect.
-- `tag_suffix` is a string listing the set of inflection tags to add after the person/number tags.
-- `person_number_list` is a list of the person/number slot suffixes to add to `slot_prefix`.
local function add_personal_slot(verb_slots, slot_prefix, tag_suffix, person_number_list)
for _, persnum in ipairs(person_number_list) do
local persnum_tag = all_persons_numbers[persnum]
local slot = slot_prefix .. "_" .. persnum
local accel = persnum_tag .. "|" .. tag_suffix
table.insert(verb_slots, {slot, accel})
end
end
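-- Illustrative example (assumption): add_personal_slot(slots, "pres", "pres|ind", person_number_list)
-- appends {"pres_1s", "1|s|pres|ind"} through {"pres_3p", "3|p|pres|ind"} to `slots`.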
-- Add a personal slot (i.e. a slot with person/number variants) to `verb_slots_basic`.
local function add_basic_personal_slot(slot_prefix, tag_suffix, person_number_list, no_special_verb_form_of_slot)
add_personal_slot(alternant_multiword_spec.verb_slots_basic, slot_prefix, tag_suffix, person_number_list)
-- Add special slots for handling non-reflexive parts of reflexive verbs in {{pt-verb form of}}.
-- See comment above in `need_special_verb_form_of_slots`.
if need_special_verb_form_of_slots and not no_special_verb_form_of_slot then
for _, persnum in ipairs(person_number_list) do
local persnum_tag = all_persons_numbers[persnum]
local basic_slot = slot_prefix .. "_" .. persnum
local accel = persnum_tag .. "|" .. tag_suffix
table.insert(alternant_multiword_spec.verb_slots_reflexive_verb_form_of, {basic_slot .. "_non_reflexive", "-"})
end
end
end
add_basic_personal_slot("pres", "pres|ind", person_number_list)
add_basic_personal_slot("impf", "impf|ind", person_number_list)
add_basic_personal_slot("pret", "pret|ind", person_number_list)
add_basic_personal_slot("plup", "plup|ind", person_number_list)
add_basic_personal_slot("fut", "fut|ind", person_number_list)
add_basic_personal_slot("cond", "cond", person_number_list)
add_basic_personal_slot("pres_sub", "pres|sub", person_number_list)
add_basic_personal_slot("impf_sub", "impf|sub", person_number_list)
add_basic_personal_slot("fut_sub", "fut|sub", person_number_list)
add_basic_personal_slot("imp", "imp", imp_person_number_list)
add_basic_personal_slot("pers_inf", "pers|inf", person_number_list)
-- Don't need special non-reflexive-part slots because the negative imperative is multiword, of which the
-- individual words are 'não' + subjunctive.
add_basic_personal_slot("neg_imp", "neg|imp", neg_imp_person_number_list, "no special verb form of")
-- Don't need special non-reflexive-part slots because we don't want [[esbaldando]] mapping to [[esbaldando-me]]
-- (only [[esbaldando-se]]) or [[esbaldar]] mapping to [[esbaldar-me]] (only [[esbaldar-se]]).
add_basic_personal_slot("infinitive", "inf", person_number_list, "no special verb form of")
add_basic_personal_slot("gerund", "ger", person_number_list, "no special verb form of")
-- Generate the list of all slots.
alternant_multiword_spec.all_verb_slots = {}
for _, slot_and_accel in ipairs(alternant_multiword_spec.verb_slots_basic) do
table.insert(alternant_multiword_spec.all_verb_slots, slot_and_accel)
end
for _, slot_and_accel in ipairs(alternant_multiword_spec.verb_slots_reflexive_verb_form_of) do
table.insert(alternant_multiword_spec.all_verb_slots, slot_and_accel)
end
alternant_multiword_spec.verb_slots_basic_map = {}
for _, slotaccel in ipairs(alternant_multiword_spec.verb_slots_basic) do
local slot, accel = unpack(slotaccel)
alternant_multiword_spec.verb_slots_basic_map[slot] = accel
end
end
local overridable_stems = {}
local function allow_multiple_values(separated_groups, data)
local retvals = {}
for _, separated_group in ipairs(separated_groups) do
local footnotes = data.fetch_footnotes(separated_group)
local retval = {form = separated_group[1], footnotes = footnotes}
table.insert(retvals, retval)
end
return retvals
end
local function simple_choice(choices)
return function(separated_groups, data)
if #separated_groups > 1 then
data.parse_err("For spec '" .. data.prefix .. ":', only one value currently allowed")
end
if #separated_groups[1] > 1 then
data.parse_err("For spec '" .. data.prefix .. ":', no footnotes currently allowed")
end
local choice = separated_groups[1][1]
if not m_table.contains(choices, choice) then
data.parse_err("For spec '" .. data.prefix .. ":', saw value '" .. choice .. "' but expected one of '" ..
table.concat(choices, ",") .. "'")
end
return choice
end
end
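-- Illustrative example (assumption): simple_choice({"ar", "er"}) returns a parser that accepts a
-- single unfootnoted value, either "ar" or "er" (as used for the sub_conj override below), and
-- calls data.parse_err() for anything else.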
for _, overridable_stem in ipairs {
"pres_unstressed",
"pres_stressed",
"pres1_and_sub",
-- Don't include pres1; use pres_1s if you need to override just that form
"impf",
"full_impf",
"pret_base",
"pret",
{"pret_conj", simple_choice({"irreg", "ar", "er", "ir"}) },
"fut",
"cond",
"pres_sub_stressed",
"pres_sub_unstressed",
{"sub_conj", simple_choice({"ar", "er"}) },
"plup",
"impf_sub",
"fut_sub",
"pers_inf",
"pp",
"short_pp",
} do
if type(overridable_stem) == "string" then
overridable_stems[overridable_stem] = allow_multiple_values
else
local stem, validator = unpack(overridable_stem)
overridable_stems[stem] = validator
end
end
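-- Illustrative example (assumption): after this loop, overridable_stems["impf"] is
-- allow_multiple_values, while overridable_stems["pret_conj"] is a validator that only accepts
-- "irreg", "ar", "er" or "ir".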
-- Useful as the value of the `match` property of a built-in verb. `main_verb_spec` is a Lua pattern that should match
-- the non-prefixed part of a verb, and `prefix_specs` is a list of Lua patterns that should match the prefixed part of
-- a verb. If a prefix spec is preceded by ^, it must match exactly at the beginning of the verb; otherwise, additional
-- prefixes (e.g. re-, des-) may precede. Return the prefix and main verb.
local function match_against_verbs(main_verb_spec, prefix_specs)
return function(verb)
for _, prefix_spec in ipairs(prefix_specs) do
if prefix_spec:find("^%^") then
-- must match exactly
prefix_spec = prefix_spec:gsub("^%^", "")
if prefix_spec == "" then
-- We can't use the second branch of the if-else statement because an empty () returns the current position
-- in rmatch().
local main_verb = rmatch(verb, "^(" .. main_verb_spec .. ")$")
if main_verb then
return "", main_verb
end
else
local prefix, main_verb = rmatch(verb, "^(" .. prefix_spec .. ")(" .. main_verb_spec .. ")$")
if prefix then
return prefix, main_verb
end
end
else
local prefix, main_verb = rmatch(verb, "^(.*" .. prefix_spec .. ")(" .. main_verb_spec .. ")$")
if prefix then
return prefix, main_verb
end
end
end
return nil
end
end
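-- Illustrative example (assumption): match_against_verbs("ter", {"abs", "^a", "con"}) yields a
-- matcher where "abster" returns ("abs", "ter"), "ater" returns ("a", "ter") since "^a" must match
-- at the very start, and "abater" returns nil, so such a verb falls through to regular handling.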
--[=[
Built-in (usually irregular) conjugations.
Each entry is processed in turn and consists of an object with two fields:
1. match=: Specifies the built-in verbs that match this object.
2. forms=: Specifies the built-in stems and forms for these verbs.
The value of match= is either a string beginning with "^" (match only the specified verb), a string not beginning
with "^" (match any verb ending in that string), or a function that is passed in the verb and should return the prefix
of the verb if it matches, otherwise nil. The function match_against_verbs() is provided to facilitate matching a set
of verbs with a common ending and specific prefixes (e.g. [[ter]] and [[ater]] but not [[abater]], etc.).
The value of forms= is a table specifying stems and individual override forms. Each key of the table names either a
stem (e.g. `pres_stressed`), a stem property (e.g. `vowel_alt`) or an individual override form (e.g. `pres_1s`).
Each value of a stem can either be a string (a single stem), a list of strings, or a list of objects of the form
{form = STEM, footnotes = {FOOTNOTES}}. Each value of an individual override should be of exactly the same form except
that the strings specify full forms rather than stems. The values of a stem property depend on the specific property
but are generally strings or booleans.
In order to understand how the stem specifications work, it's important to understand the phonetic modifications done
by combine_stem_ending(). In general, the complexities of predictable prefix, stem and ending modifications are all
handled in this function. In particular:
1. Spelling-based modifications (c/z, g/gu, gu/gü, g/j) occur automatically as appropriate for the ending.
2. If the stem begins with an acute accent, the accent is moved onto the last vowel of the prefix (for handling verbs
in -uar such as [[minguar]], pres_3s 'míngua').
3. If the ending begins with a double asterisk, this is a signal to conditionally delete the accent on the last letter
of the stem. "Conditionally" means we don't do it if the last two letters would form a diphthong without the accent
on the second one (e.g. in [[sair]], with stem 'saí'); but as an exception, we do delete the accent in stems
ending in -guí, -quí (e.g. in [[conseguir]]) because in this case the ui isn't a diphthong.
4. If the ending begins with an asterisk, this is a signal to delete the accent on the last letter of the stem, e.g.
fizé -> fizermos. Unlike for **, this removal is unconditional, so we get e.g. 'sairmos' not #'saírmos'.
5. If the ending begins with i, it must get an accent after an unstressed vowel (in some but not all cases) to prevent the
two merging into a diphthong. See combine_stem_ending() for specifics.
The following stems are recognized:
-- pres_unstressed: The present indicative unstressed stem (1p, 2p). Also controls the imperative 2p
and gerund. Defaults to the infinitive stem (minus the ending -ar/-er/-ir/-or).
-- pres_stressed: The present indicative stressed stem (1s, 2s, 3s, 3p). Also controls the imperative 2s.
Default is empty if indicator `no_pres_stressed`, else a vowel alternation if such an indicator is given
(e.g. `ue`, `ì`), else the infinitive stem.
-- pres1_and_sub: Overriding stem for 1s present indicative and the entire subjunctive. Only set by irregular verbs
and by the indicators `no_pres_stressed` (e.g. [[precaver]]) and `no_pres1_and_sub` (since verbs of this sort,
e.g. [[puir]], are missing the entire subjunctive as well as the 1s present indicative). Used by many irregular
verbs, e.g. [[caber]], verbs in '-air', [[dizer]], [[ter]], [[valer]], etc. Some verbs set this and then supply an
override for the pres_1sg if it's irregular, e.g. [[saber]], with irregular subjunctive stem "saib-" and special
1s present indicative "sei".
-- pres1: Special stem for 1s present indicative. Normally, do not set this explicitly. If you need to specify an
irregular 1s present indicative, use the form override pres_1s= to specify the entire form. Defaults to
pres1_and_sub if given, else pres_stressed.
-- pres_sub_unstressed: The present subjunctive unstressed stem (1p, 2p). Defaults to pres1_and_sub if given, else the
infinitive stem.
-- pres_sub_stressed: The present subjunctive stressed stem (1s, 2s, 3s, 1p). Defaults to pres1.
-- sub_conj: Determines the set of endings used in the subjunctive. Should be one of "ar" or "er".
-- impf: The imperfect stem (not including the -av-/-i- stem suffix, which is determined by the conjugation). Defaults
to the infinitive stem.
-- full_impf: The full imperfect stem missing only the endings (-a, -as, -am, etc.). Used for verbs with irregular
imperfects such as [[ser]], [[ter]], [[vir]] and [[pôr]]. Overrides must be supplied for the impf_1p and impf_2p
due to these forms having an accent on the stem.
-- pret_base: The preterite stem (not including the -a-/-e-/-i- stem suffix). Defaults to the infinitive stem.
-- pret: The full preterite stem missing only the endings (-ste, -mos, etc.). Used for verbs with irregular preterites
(pret_conj == "irreg") such as [[fazer]], [[poder]], [[trazer]], etc. Overrides must be supplied for the pret_1s
and pret_3s. Defaults to `pret_base` + the accented conjugation vowel.
-- pret_conj: Determines the set of endings used in the preterite. Should be one of "ar", "er", "ir" or "irreg".
Defaults to the conjugation as determined from the infinitive. When pret_conj == "irreg", stem `pret` is used,
otherwise `pret_base`.
-- fut: The future stem. Defaults to the infinitive stem + the unaccented conjugation vowel.
-- cond: The conditional stem. Defaults to `fut`.
-- impf_sub: The imperfect subjunctive stem. Defaults to `pret`.
-- fut_sub: The future subjunctive stem. Defaults to `pret`.
-- plup: The pluperfect stem. Defaults to `pret`.
-- pers_inf: The personal infinitive stem. Defaults to the infinitive stem + the accented conjugation vowel.
-- pp: The masculine singular past participle. Default is based on the verb conjugation: infinitive stem + "ado" for
-ar verbs, otherwise infinitive stem + "ido".
-- short_pp: The short masculine singular past participle, for verbs with such a form. No default.
-- pp_inv: True if the past participle exists only in the masculine singular.
]=]
local built_in_conjugations = {
--------------------------------------------------------------------------------------------
-- -ar --
--------------------------------------------------------------------------------------------
-- Verbs not needing entries here:
--
-- (1) Verbs with short past participles: need to specify the short pp explicitly.
--
-- aceitar: use <short_pp:aceito[Brazil],aceite[Portugal]>
-- anexar, completar, expressar, expulsar, findar, fritar, ganhar, gastar, limpar, pagar, pasmar, pegar, soltar:
-- use <short_pp:anexo> etc.
-- assentar: use <short_pp:assente>
-- entregar: use <short_pp:entregue>
-- enxugar: use <short_pp:enxuto>
-- matar: use <short_pp:morto>
--
-- (2) Verbs with orthographic consonant alternations: handled automatically.
--
-- -car (brincar, buscar, pecar, trancar, etc.): automatically handled in combine_stem_ending()
-- -çar (alcançar, começar, laçar): automatically handled in combine_stem_ending()
-- -gar (apagar, cegar, esmagar, largar, navegar, resmungar, sugar, etc.): automatically handled in combine_stem_ending()
--
-- (3) Verbs with vowel alternations: need to specify the alternation explicitly unless it always happens, in
-- which case it's handled automatically through an entry below.
--
-- esmiuçar changing to esmiúço: use <ú>
-- faiscar changing to faísco: use <í>
-- -iar changing to -eio (ansiar, incendiar, mediar, odiar, remediar, etc.): use <ei>
-- -izar changing to -ízo (ajuizar, enraizar, homogeneizar, plebeizar, etc.): use <í>
-- mobiliar changing to mobílio: use <í>
-- reusar changing to reúso: use <ú>
-- saudar changing to saúdo: use <ú>
-- tuitar/retuitar changing to (re)tuíto: use <í>
{
-- dar, desdar
match = match_against_verbs("dar", {"^", "^des", "^re"}),
forms = {
pres_1s = "dou",
pres_2s = "dás",
pres_3s = "dá",
-- damos, dais regular
pres_3p = "dão",
pret = "dé", pret_conj = "irreg", pret_1s = "dei", pret_3s = "deu",
pres_sub_1s = "dê",
pres_sub_2s = "dês",
pres_sub_3s = "dê",
pres_sub_1p = {"demos", "dêmos"},
-- deis regular
pres_sub_3p = {"deem", VAR_SUPERSEDED .. "dêem"},
irreg = true,
}
},
{
-- -ear (frear, nomear, semear, etc.)
match = "ear",
forms = {
pres_stressed = "ei",
e_ei_cat = true,
}
},
{
-- estar
match = match_against_verbs("estar", {"^", "sob", "sobr"}),
forms = {
pres_1s = "estou",
pres_2s = "estás",
pres_3s = "está",
-- FIXME, estámos is claimed as an alternative pres_1p in the old conjugation data, but I believe this is garbage
pres_3p = "estão",
pres1_and_sub = "estej", -- only for subjunctive as we override pres_1s
sub_conj = "er",
pret = "estivé", pret_conj = "irreg", pret_1s = "estive", pret_3s = "esteve",
-- [[sobestar]], [[sobrestar]] are transitive so they have fully inflected past participles
pp_inv = function(base, prefix) return prefix == "" end,
irreg = true,
}
},
{
-- It appears that only [[resfolegar]] has proparoxytone forms, not [[folegar]] or [[tresfolegar]].
match = "^resfolegar",
forms = {
pres_stressed = {"resfóleg", "resfoleg"},
irreg = true,
}
},
{
-- aguar/desaguar/enxaguar, ambiguar/apaziguar/averiguar, minguar, cheguar?? (obsolete variant of [[chegar]])
match = "guar",
forms = {
-- combine_stem_ending() will move the acute accent backwards so it sits after the last vowel in [[minguar]]
pres_stressed = {{form = AC .. "gu", footnotes = {"[Brazilian Portuguese]"}}, {form = "gu", footnotes = {"[European Portuguese]"}}},
pres_sub_stressed = {
{form = AC .. "gu", footnotes = {"[Brazilian Portuguese]"}},
{form = "gu", footnotes = {"[European Portuguese]"}},
{form = AC .. VAR_SUPERSEDED .. "gü", footnotes = {"[Brazilian Portuguese]"}},
{form = VAR_SUPERSEDED .. "gú", footnotes = {"[European Portuguese]"}},
},
pres_sub_unstressed = {"gu", {form = VAR_SUPERSEDED .. "gü", footnotes = {"[Brazilian Portuguese]"}}},
pret_1s = {"guei", {form = VAR_SUPERSEDED .. "güei", footnotes = {"[Brazilian Portuguese]"}}},
}
},
{
-- adequar/readequar, antiquar/obliquar, apropinquar
match = "quar",
forms = {
-- combine_stem_ending() will move the acute accent backwards so it sits after the last vowel in [[apropinquar]]
pres_stressed = {{form = AC .. "qu", footnotes = {"[Brazilian Portuguese]"}}, {form = "qu", footnotes = {"[European Portuguese]"}}},
pres_sub_stressed = {
{form = AC .. "qu", footnotes = {"[Brazilian Portuguese]"}},
{form = "qu", footnotes = {"[European Portuguese]"}},
{form = AC .. VAR_SUPERSEDED .. "qü", footnotes = {"[Brazilian Portuguese]"}},
{form = VAR_SUPERSEDED .. "qú", footnotes = {"[European Portuguese]"}},
},
pres_sub_unstressed = {"qu", {form = VAR_SUPERSEDED .. "qü", footnotes = {"[Brazilian Portuguese]"}}},
pret_1s = {"quei", {form = VAR_SUPERSEDED .. "qüei", footnotes = {"[Brazilian Portuguese]"}}},
}
},
{
-- -oar (abençoar, coroar, enjoar, perdoar, etc.)
match = "oar",
forms = {
pres_1s = {"oo", VAR_SUPERSEDED .. "ôo"},
}
},
{
-- -oiar (apoiar, boiar)
match = "oiar",
forms = {
pres_stressed = {"oi", {form = VAR_SUPERSEDED .. "ói", footnotes = {"[Brazilian Portuguese]"}}},
}
},
{
-- parar
match = "^parar",
forms = {
pres_3s = {"para", VAR_SUPERSEDED .. "pára"},
}
},
{
-- pelar
match = "^pelar",
forms = {
pres_1s = {"pelo", VAR_SUPERSEDED .. "pélo"},
pres_2s = {"pelas", VAR_SUPERSEDED .. "pélas"},
pres_3s = {"pela", VAR_SUPERSEDED .. "péla"},
}
},
--------------------------------------------------------------------------------------------
-- -er --
--------------------------------------------------------------------------------------------
-- Verbs not needing entries here:
--
-- precaver: use <no_pres_stressed>
-- -cer (verbs in -ecer, descer, vencer, etc.): automatically handled in combine_stem_ending()
-- -ger (proteger, reger, etc.): automatically handled in combine_stem_ending()
-- -guer (erguer/reerguer/soerguer): automatically handled in combine_stem_ending()
{
-- benzer
match = "benzer",
forms = {short_pp = "bento"}
},
{
-- caber
match = "caber",
forms = {
pres1_and_sub = "caib",
pret = "coubé", pret_1s = "coube", pret_3s = "coube", pret_conj = "irreg",
irreg = true,
}
},
{
-- crer, descrer
match = "crer",
forms = {
pres_2s = "crês", pres_3s = "crê",
pres_2p = "credes", pres_3p = {"creem", VAR_SUPERSEDED .. "crêem"},
pres1_and_sub = "crei",
irreg = true,
}
},
{
-- dizer, bendizer, condizer, contradizer, desdizer, maldizer, predizer, etc.
match = "dizer",
forms = {
-- use 'digu' because we're in a front environment; if we use 'dig', we'll get '#dijo'
pres1_and_sub = "digu", pres_3s = "diz",
pret = "dissé", pret_conj = "irreg", pret_1s = "disse", pret_3s = "disse", pp = "dito",
fut = "dir",
imp_2s = {"diz", "dize"}, -- per Infopédia
irreg = true,
}
},
{
-- eleger, reeleger
match = "eleger",
forms = {short_pp = "eleito"}
},
{
-- acender, prender; not desprender, etc.
match = match_against_verbs("ender", {"^ac", "^pr"}),
forms = {short_pp = "eso"}
},
{
-- fazer, afazer, contrafazer, desfazer, liquefazer, perfazer, putrefazer, rarefazer, refazer, satisfazer, tumefazer
match = "fazer",
forms = {
pres1_and_sub = "faç", pres_3s = "faz",
pret = "fizé", pret_conj = "irreg", pret_1s = "fiz", pret_3s = "fez", pp = "feito",
fut = "far",
imp_2s = {"faz", {form = "faze", footnotes = {"[Brazil only]"}}}, -- per Priberam
irreg = true,
}
},
{
match = "^haver",
forms = {
pres_1s = "hei",
pres_2s = "hás",
pres_3s = "há",
pres_1p = {"havemos", "hemos"},
pres_2p = {"haveis", "heis"},
pres_3p = "hão",
pres1_and_sub = "haj", -- only for subjunctive as we override pres_1s
pret = "houvé", pret_conj = "irreg", pret_1s = "houve", pret_3s = "houve",
imp_2p = "havei",
irreg = true,
}
},
-- reaver below under r-
{
-- jazer, adjazer
match = "jazer",
forms = {
pres_3s = "jaz",
imp_2s = {"jaz", "jaze"}, -- per Infopédia
irreg = true,
}
},
{
-- ler, reler, tresler; not excel(l)er, valer, etc.
match = match_against_verbs("ler", {"^", "^re", "tres"}),
forms = {
pres_2s = "lês", pres_3s = "lê",
pres_2p = "ledes", pres_3p = {"leem", VAR_SUPERSEDED .. "lêem"},
pres1_and_sub = "lei",
irreg = true,
}
},
{
-- morrer, desmorrer
match = "morrer",
forms = {short_pp = "morto"}
},
{
-- doer, moer/remoer, roer/corroer, soer
match = "oer",
forms = {
pres_1s = function(base, prefix)
return prefix ~= "s" and {"oo", VAR_SUPERSEDED .. "ôo"} or nil
end, pres_2s = "óis", pres_3s = "ói",
-- impf -ía etc., pret_1s -oí and pp -oído handled automatically in combine_stem_ending()
only3sp = function(base, prefix) return prefix == "d" end,
no_pres1_and_sub = function(base, prefix) return prefix == "s" end,
irreg = true,
}
},
{
-- perder
match = "perder",
forms = {
-- use 'perqu' because we're in a front environment; if we use 'perc', we'll get '#perço'
pres1_and_sub = "perqu",
irreg = true,
}
},
{
-- poder
match = "poder",
forms = {
pres1_and_sub = "poss",
pret = "pudé", pret_1s = "pude", pret_3s = "pôde", pret_conj = "irreg",
irreg = true,
}
},
{
-- prazer, aprazer, comprazer, desprazer
match = "prazer",
forms = {
pres_3s = "praz",
pret = "prouvé", pret_1s = "prouve", pret_3s = "prouve", pret_conj = "irreg",
only3sp = function(base, prefix) return not prefix:find("com$") end,
irreg = true,
}
},
-- prover below, just below ver
{
-- requerer; must precede querer
match = "requerer",
forms = {
-- old module claims alt pres_3s 'requere'; not in Priberam, Infopédia or conjugacao.com.br
pres_3s = "requer",
pres1_and_sub = "requeir",
imp_2s = {{form = "requere", footnotes = {"[Brazil only]"}}, "requer"}, -- per Priberam
-- regular preterite, unlike [[querer]]
irreg = true,
}
},
{
-- querer, desquerer, malquerer
match = "querer",
forms = {
-- old module claims alt pres_3s 'quere'; not in Priberam, Infopédia or conjugacao.com.br
pres_1s = "quero", pres_3s = "quer",
pres1_and_sub = "queir", -- only for subjunctive as we override pres_1s
pret = "quisé", pret_1s = "quis", pret_3s = "quis", pret_conj = "irreg",
imp_2s = {{form = "quere", footnotes = {"[Brazil only]"}}, {form = "quer", footnotes = {"[Brazil only]"}}}, -- per Priberam
irreg = true,
}
},
{
match = "reaver",
forms = {
no_pres_stressed = true,
pret = "reouvé", pret_conj = "irreg", pret_1s = "reouve", pret_3s = "reouve",
irreg = true,
}
},
{
-- saber, ressaber
match = "saber",
forms = {
pres_1s = "sei",
pres1_and_sub = "saib", -- only for subjunctive as we override pres_1s
pret = "soubé", pret_1s = "soube", pret_3s = "soube", pret_conj = "irreg",
irreg = true,
}
},
{
-- escrever/reescrever, circunscrever, descrever/redescrever, inscrever, prescrever, proscrever, subscrever,
-- transcrever, others?
match = "screver",
forms = {
pp = "scrito",
irreg = true,
}
},
{
-- suspender
match = "suspender",
forms = {short_pp = "suspenso"}
},
{
match = "^ser",
forms = {
pres_1s = "sou", pres_2s = "és", pres_3s = "é",
pres_1p = "somos", pres_2p = "sois", pres_3p = "são",
pres1_and_sub = "sej", -- only for subjunctive as we override pres_1s
full_impf = "er", impf_1p = "éramos", impf_2p = "éreis",
pret = "fô", pret_1s = "fui", pret_3s = "foi", pret_conj = "irreg",
imp_2s = "sê", imp_2p = "sede",
pp_inv = true,
irreg = true,
}
},
{
-- We want to match abster, conter, deter, etc. but not abater, cometer, etc. No way to avoid listing each verb.
match = match_against_verbs("ter", {"abs", "^a", "con", "de", "entre", "man", "ob", "^re", "sus", "^"}),
forms = {
pres_2s = function(base, prefix) return prefix == "" and "tens" or "téns" end,
pres_3s = function(base, prefix) return prefix == "" and "tem" or "tém" end,
pres_2p = "tendes", pres_3p = "têm",
pres1_and_sub = "tenh",
full_impf = "tinh", impf_1p = "tínhamos", impf_2p = "tínheis",
pret = "tivé", pret_1s = "tive", pret_3s = "teve", pret_conj = "irreg",
irreg = true,
}
},
{
match = "trazer",
forms = {
-- use 'tragu' because we're in a front environment; if we use 'trag', we'll get '#trajo'
pres1_and_sub = "tragu", pres_3s = "traz",
pret = "trouxé", pret_1s = "trouxe", pret_3s = "trouxe", pret_conj = "irreg",
fut = "trar",
irreg = true,
}
},
{
-- valer, desvaler, equivaler
match = "valer",
forms = {
pres1_and_sub = "valh",
irreg = true,
}
},
{
-- coerir, incoerir
--FIXME: This should be a part of the <i-e> section. It's an "i-e", but with accents to prevent a diphthong when it gets stressed.
match = "coerir",
forms = {
vowel_alt = "i-e",
pres1_and_sub = "coír",
pres_sub_unstressed = "coir",
}
},
{
-- We want to match antever etc. but not absolver, atrever etc. No way to avoid listing each verb.
match = match_against_verbs("ver", {"ante", "entre", "pre", "^re", "^"}),
forms = {
pres_2s = "vês", pres_3s = "vê",
pres_2p = "vedes", pres_3p = {"veem", VAR_SUPERSEDED .. "vêem"},
pres1_and_sub = "vej",
pret = "ví", pret_1s = "vi", pret_3s = "viu", pret_conj = "irreg",
pp = "visto",
irreg = true,
}
},
{
-- [[prover]] and [[desprover]] have regular preterite and past participle
match = "prover",
forms = {
pres_2s = "provês", pres_3s = "provê",
pres_2p = "provedes", pres_3p = {"proveem", VAR_SUPERSEDED .. "provêem"},
pres1_and_sub = "provej",
irreg = true,
}
},
{
-- Only envolver, revolver. Not volver, desenvolver, devolver, evolver, etc.
match = match_against_verbs("volver", {"^en", "^re"}),
forms = {short_pp = "volto"},
},
--------------------------------------------------------------------------------------------
-- -ir --
--------------------------------------------------------------------------------------------
-- Verbs not needing entries here:
--
-- abolir: per Priberam: <no_pres1_and_sub> for Brazil, use <u-o> for Portugal
-- barrir: use <only3sp>
-- carpir, colorir, demolir: use <no_pres1_and_sub>
-- descolorir: per Priberam: <no_pres_stressed> for Brazil, use <no_pres1_and_sub> for Portugal
-- delir, espavorir, falir, florir, remir, renhir: use <no_pres_stressed>
-- empedernir: per Priberam: <no_pres_stressed> for Brazil, use <i-e> for Portugal
-- transir: per Priberam: <no_pres_stressed> for Brazil, regular for Portugal
-- aspergir, despir, flectir/deflectir/genuflectir/genufletir/reflectir/refletir, mentir/desmentir,
-- sentir/assentir/consentir/dissentir/pressentir/ressentir, convergir/divergir, aderir/adherir,
-- ferir/auferir/conferir/deferir/desferir/diferir/differir/inferir/interferir/preferir/proferir/referir/transferir,
-- gerir/digerir/ingerir/sugerir, preterir, competir/repetir, servir, advertir/animadvertir/divertir,
-- vestir/investir/revestir/travestir, seguir/conseguir/desconseguir/desseguir/perseguir/prosseguir: use <i-e>
-- inerir: use <i-e> (per Infopédia, and per Priberam for Brazil), use <i-e.only3sp> (per Priberam for Portugal)
-- compelir/expelir/impelir/repelir: per Priberam: use <i-e> for Brazil, <no_pres1_and_sub> for Portugal (Infopédia
-- says <i-e>); NOTE: old module claims short_pp 'repulso' but none of Priberam, Infopédia and conjugacao.com.br agree
-- dormir, engolir, tossir, subir, acudir/sacudir, fugir, sumir/consumir (NOT assumir/presumir/resumir): use <u-o>
-- polir/repolir (claimed in old module to have no pres stressed, but Priberam disagrees for both Brazil and
-- Portugal; Infopédia lists repolir as completely regular and not like polir, but I think that's an error): use
-- <u>
-- premir: per Priberam: use <no_pres1_and_sub> for Brazil, <i> for Portugal (for Portugal, Priberam says
-- primo/primes/prime, while Infopédia says primo/premes/preme; Priberam is probably more reliable)
-- extorquir/retorquir use <no_pres1_and_sub> for Brazil, <u-o,u> for Portugal
-- agredir/progredir/regredir/transgredir: use <i>
-- denegrir, prevenir: use <i>
-- eclodir: per Priberam: regular in Brazil, <u-o.only3sp> in Portugal (Infopédia says regular)
-- cerzir: per Priberam: use <i> for Brazil, use <i-e> for Portugal (Infopédia says <i-e,i>)
-- cergir: per Priberam: use <i-e> for Brazil, no conjugation given for Portugal (Infopédia says <i-e>)
-- proibir/coibir: use <í>
-- reunir: use <ú>
-- parir/malparir: use <no_pres_stressed> (old module had pres_1s = {paro (1_defective), pairo (1_obsolete_alt)},
-- pres_2s = pares, pres_3s = pare, and subjunctive stem par- or pair-, but both Priberam and Infopédia agree
-- that these verbs are no_pres_stressed)
-- explodir/implodir: use <u-o> (claimed in old module to be <+,u-o> but neither Priberam nor Infopédia agree)
--
-- -cir alternations (aducir, ressarcir): automatically handled in combine_stem_ending()
-- -gir alternations (agir, dirigir, exigir): automatically handled in combine_stem_ending()
-- -guir alternations (e.g. conseguir): automatically handled in combine_stem_ending()
-- -quir alternations (e.g. extorquir): automatically handled in combine_stem_ending()
{
-- verbs in -air (cair, sair, trair and derivatives: decair/descair/recair, sobres(s)air,
-- abstrair/atrair/contrair/distrair/extrair/protrair/retrair/subtrair)
match = "air",
forms = {
pres1_and_sub = "ai", pres_2s = "ais", pres_3s = "ai",
-- all occurrences of accented í in endings handled in combine_stem_ending()
irreg = true,
}
},
{
-- abrir/desabrir/reabrir
match = "abrir",
forms = {pp = "aberto"}
},
{
-- cobrir/descobrir/encobrir/recobrir/redescobrir
match = "cobrir",
forms = {vowel_alt = "u-o", pp = "coberto"}
},
{
-- conduzir, produzir, reduzir, traduzir, etc.; luzir, reluzir, tremeluzir
match = "uzir",
forms = {
pres_3s = "uz",
imp_2s = {"uz", "uze"}, -- per Infopédia
irreg = true,
}
},
{
-- pedir, desimpedir, despedir, espedir, expedir, impedir
-- medir
-- comedir (per Priberam, no_pres_stressed in Brazil)
match = match_against_verbs("edir", {"m", "p"}),
forms = {
pres1_and_sub = "eç",
irreg = true,
}
},
{
-- frigir
match = "frigir",
forms = {vowel_alt = "i-e", short_pp = "frito"},
},
{
-- inserir
match = "inserir",
forms = {vowel_alt = "i-e", short_pp = {form = "inserto", footnotes = {"[European Portuguese only]"}}},
},
{
-- ir
match = "^ir",
forms = {
pres_1s = "vou", pres_2s = "vais", pres_3s = "vai",
pres_1p = "vamos", pres_2p = "ides", pres_3p = "vão",
pres_sub_1s = "vá", pres_sub_2s = "vás", pres_sub_3s = "vá",
pres_sub_1p = "vamos", pres_sub_2p = "vades", pres_sub_3p = "vão",
pret = "fô", pret_1s = "fui", pret_3s = "foi", pret_conj = "irreg",
irreg = true,
}
},
{
-- emergir, imergir, submergir
match = "mergir",
forms = {vowel_alt = {"i-e", "+"}, short_pp = "merso"},
},
{
match = "ouvir",
forms = {
pres1_and_sub = {"ouç", "oiç"},
irreg = true,
}
},
{
-- exprimir, imprimir, comprimir (but not descomprimir per Priberam), deprimir, oprimir/opprimir (but not reprimir,
-- suprimir/supprimir per Priberam)
match = match_against_verbs("primir", {"^com", "ex", "im", "de", "^o", "op"}),
forms = {short_pp = "presso"}
},
{
-- rir, sorrir
match = match_against_verbs("rir", {"^", "sor"}),
forms = {
pres_2s = "ris", pres_3s = "ri", pres_2p = "rides", pres_3p = "riem",
pres1_and_sub = "ri",
irreg = true,
}
},
{
-- distinguir, extinguir
match = "tinguir",
forms = {
short_pp = "tinto",
-- gu/g alternations handled in combine_stem_ending()
}
},
{
-- delinquir, arguir/redarguir
-- NOTE: The following is based on delinquir, with arguir/redarguir by parallelism.
-- In Priberam, delinquir and arguir are exactly parallel, but in Infopédia they aren't; only delinquir has
-- alternatives like 'delínques'. I assume this is because forms like 'delínques' are Brazilian and
-- Infopédia is from Portugal, so their coverage of Brazilian forms may be inconsistent.
match = match_against_verbs("uir", {"delinq", "arg"}),
forms = {
-- use 'ü' because we're in a front environment; if we use 'u', we'll get '#delinco', '#argo'
pres1_and_sub = {{form = AC .. "ü", footnotes = {"[Brazilian Portuguese]"}}, {form = "ü", footnotes = {"[European Portuguese]"}}},
-- FIXME: verify. This is by partial parallelism with the present subjunctive of verbs in -quar (also a
-- front environment). Infopédia has 'delinquis ou delínques' and Priberam has 'delinqúis'.
pres_2s = {
{form = AC .. "ues", footnotes = {"[Brazilian Portuguese]"}},
{form = "uis", footnotes = {"[European Portuguese]"}},
-- This form should occur only with an infinitive 'delinqüir' etc.
-- {form = AC .. VAR_SUPERSEDED .. "ües", footnotes = {"[Brazilian Portuguese]"}},
{form = VAR_SUPERSEDED .. "úis", footnotes = {"[European Portuguese]"}},
},
-- Same as previous.
pres_3s = {
{form = AC .. "ue", footnotes = {"[Brazilian Portuguese]"}},
{form = "ui", footnotes = {"[European Portuguese]"}},
-- This form should occur only with an infinitive 'delinqüir' etc.
-- {form = AC .. VAR_SUPERSEDED .. "üe", footnotes = {"[Brazilian Portuguese]"}},
{form = VAR_SUPERSEDED .. "úi", footnotes = {"[European Portuguese]"}},
},
-- Infopédia has 'delinquem ou delínquem' and Priberam has 'delinqúem'.
pres_3p = {
{form = AC .. "uem", footnotes = {"[Brazilian Portuguese]"}},
{form = "uem", footnotes = {"[European Portuguese]"}},
-- This form should occur only with an infinitive 'delinqüir' etc.
-- {form = AC .. VAR_SUPERSEDED .. "üem", footnotes = {"[Brazilian Portuguese]"}},
{form = VAR_SUPERSEDED .. "úem", footnotes = {"[European Portuguese]"}},
},
-- FIXME: The old module also had several other alternative forms (given as [123]_alt, not identified as
-- obsolete):
-- impf: delinquia/delinquía, delinquias/delinquías, delinquia/delinquía, delinquíamos, delinquíeis, delinquiam/delinquíam
-- plup: delinquira/delinquíra, delinquiras/delinquíras, delinquira/delinquíra, delinquíramos, delinquíreis, delinquiram/delinquíram
-- pres_1p = delinquimos/delinquímos, pres_2p = delinquis/delinquís
-- pret = delinqui/delinquí, delinquiste/delinquíste, delinquiu, delinquimos/delinquímos, delinquistes/delinquístes, delinquiram/delinquíram
-- pers_inf = delinquir, delinquires, delinquir, delinquirmos, delinquirdes, delinquirem/delinquírem
-- fut_sub = delinquir, delinquires, delinquir, delinquirmos, delinquirdes, delinquirem/delinquírem
--
-- None of these alternative forms can be found in the Infopédia, Priberam, Collins or Reverso conjugation
-- tables, so their status is unclear, and I have omitted them.
}
},
{
-- verbs in -truir (construir, destruir, reconstruir) but not obstruir/desobstruir, instruir, which are handled
-- by the default -uir handler below
match = match_against_verbs("struir", {"con", "de"}),
forms = {
pres_2s = {"stróis", "struis"}, pres_3s = {"strói", "strui"}, pres_3p = {"stroem", "struem"},
-- all occurrences of accented í in endings handled in combine_stem_ending()
irreg = true,
}
},
{
-- verbs in -cluir (concluir, excluir, incluir): like -uir but has short_pp concluso etc. in Brazil
match = "cluir",
forms = {
pres_2s = "cluis", pres_3s = "clui",
-- all occurrences of accented í in endings handled in combine_stem_ending()
short_pp = {form = "cluso", footnotes = {"[Brazil only]"}},
irreg = true,
}
},
{
-- puir, ruir: like -uir but defective in pres_1s, all pres sub
match = match_against_verbs("uir", {"^p", "^r"}),
forms = {
pres_2s = "uis", pres_3s = "ui",
-- all occurrences of accented í in endings handled in combine_stem_ending()
no_pres1_and_sub = true,
irreg = true,
}
},
{
-- remaining verbs in -uir (concluir/excluir/incluir/concruir/concruyr, abluir/diluir, afluir/fluir/influir,
-- aluir, anuir, atribuir/contribuir/distribuir/redistribuir/retribuir/substituir, coevoluir/evoluir,
-- constituir/destituir/instituir/reconstituir/restituir, derruir, diminuir, estatuir, fruir/usufruir, imbuir,
-- imiscuir, poluir, possuir, pruir)
-- FIXME: old module lists short pp incluso for incluir that can't be verified, ask about this
-- FIXME: handle -uyr verbs?
match = function(verb)
-- Don't match -guir verbs (e.g. [[seguir]], [[conseguir]]) or -quir verbs (e.g. [[extorquir]])
if verb:find("guir$") or verb:find("quir$") then
return nil
else
return match_against_verbs("uir", {""})(verb)
end
end,
forms = {
pres_2s = "uis", pres_3s = "ui",
-- all occurrences of accented í in endings handled in combine_stem_ending()
irreg = true,
}
},
{
-- We want to match advir, convir, devir, etc. but not ouvir, servir, etc. No way to avoid listing each verb.
match = match_against_verbs("vir", {"ad", "^a", "con", "contra", "de", "^desa", "inter", "pro", "^re", "sobre", "^"}),
forms = {
pres_2s = function(base, prefix) return prefix == "" and "vens" or "véns" end,
pres_3s = function(base, prefix) return prefix == "" and "vem" or "vém" end,
pres_2p = "vindes", pres_3p = "vêm",
pres1_and_sub = "venh",
full_impf = "vinh", impf_1p = "vínhamos", impf_2p = "vínheis",
pret = "vié", pret_1s = "vim", pret_3s = "veio", pret_conj = "irreg",
pp = "vindo",
irreg = true,
}
},
--------------------------------------------------------------------------------------------
-- misc --
--------------------------------------------------------------------------------------------
{
-- pôr, antepor, apor, compor/decompor/descompor, contrapor, depor, dispor, expor, impor, interpor, justapor,
-- opor, pospor, propor, repor, sobrepor, supor/pressupor, transpor, superseded forms like [[decompôr]], others?
match = "p[oô]r",
forms = {
pres1_and_sub = "ponh",
pres_2s = "pões", pres_3s = "põe", pres_1p = "pomos", pres_2p = "pondes", pres_3p = "põem",
full_impf = "punh", impf_1p = "púnhamos", impf_2p = "púnheis",
pret = "pusé", pret_1s = "pus", pret_3s = "pôs", pret_conj = "irreg",
pers_inf = "po",
gerund = "pondo", pp = "posto",
irreg = true,
}
},
}
local function skip_slot(base, slot, allow_overrides)
if not allow_overrides and (base.basic_overrides[slot] or
base.refl and base.basic_reflexive_only_overrides[slot]) then
-- Skip any slots for which there are overrides.
return true
end
if base.only3s and (slot:find("^pp_f") or slot:find("^pp_mp")) then
-- diluviar, atardecer, neviscar; impersonal verbs have only masc sing pp
return true
end
if not slot:find("[123]") then
-- Don't skip non-personal slots.
return false
end
if base.nofinite then
return true
end
if (base.only3s or base.only3sp or base.only3p) and (slot:find("^imp_") or slot:find("^neg_imp_")) then
return true
end
if base.only3s and not slot:find("3s") then
-- diluviar, atardecer, neviscar
return true
end
if base.only3sp and not slot:find("3[sp]") then
-- atañer, concernir
return true
end
if base.only3p and not slot:find("3p") then
-- [[caer cuatro gotas]], [[caer chuzos de punta]], [[entrarle los siete males]]
return true
end
return false
end
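-- Illustrative summary of skip_slot() above: for an only3s (impersonal) verb, all personal slots
-- other than the 3s ones are skipped, as are every imperative slot and the feminine/plural past
-- participle slots, while non-personal slots (infinitive, gerund, pp_ms) are still generated;
-- slots with overrides are skipped here and filled in later when the overrides are processed.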
-- Apply vowel alternations to stem.
local function apply_vowel_alternations(stem, alternations)
local alternation_stems = {}
local saw_pres1_and_sub = false
local saw_pres_stressed = false
-- Process alternations other than +.
for _, altobj in ipairs(alternations) do
local alt = altobj.form
local pres1_and_sub, pres_stressed, err
-- Treat final -gu, -qu as a consonant, so the previous vowel can alternate (e.g. conseguir -> consigo).
-- This means a verb in -guar can't have a u-ú alternation but I don't think there are any verbs like that.
stem = rsub(stem, "([gq])u$", "%1" .. TEMPC1)
if alt == "+" then
-- do nothing yet
elseif alt == "ei" then
local before_last_vowel = rmatch(stem, "^(.*)i$")
if not before_last_vowel then
err = "stem should end in -i"
else
pres1_and_sub = nil
pres_stressed = before_last_vowel .. "ei"
end
else
local before_last_vowel, last_vowel, after_last_vowel = rmatch(stem, "^(.*)(" .. V .. ")(.-[ui])$")
if not before_last_vowel then
before_last_vowel, last_vowel, after_last_vowel = rmatch(stem, "^(.*)(" .. V .. ")(.-)$")
end
if alt == "i-e" then
if last_vowel == "e" or last_vowel == "i" then
pres1_and_sub = before_last_vowel .. "i" .. after_last_vowel
if last_vowel == "i" then
pres_stressed = before_last_vowel .. "e" .. after_last_vowel
end
else
err = "should have -e- or -i- as the last vowel"
end
elseif alt == "i" then
if last_vowel == "e" then
pres1_and_sub = before_last_vowel .. "i" .. after_last_vowel
pres_stressed = pres1_and_sub
else
err = "should have -e- as the last vowel"
end
elseif alt == "u-o" then
if last_vowel == "o" or last_vowel == "u" then
pres1_and_sub = before_last_vowel .. "u" .. after_last_vowel
if last_vowel == "u" then
pres_stressed = before_last_vowel .. "o" .. after_last_vowel
end
else
err = "should have -o- or -u- as the last vowel"
end
elseif alt == "u" then
if last_vowel == "o" then
pres1_and_sub = before_last_vowel .. "u" .. after_last_vowel
pres_stressed = pres1_and_sub
else
err = "should have -o- as the last vowel"
end
elseif alt == "í" then
if last_vowel == "i" then
pres_stressed = before_last_vowel .. "í" .. after_last_vowel
else
err = "should have -i- as the last vowel"
end
elseif alt == "ú" then
if last_vowel == "u" then
pres_stressed = before_last_vowel .. "ú" .. after_last_vowel
else
err = "should have -u- as the last vowel"
end
else
error("Internal error: Unrecognized vowel alternation '" .. alt .. "'")
end
end
if pres1_and_sub then
pres1_and_sub = {form = pres1_and_sub:gsub(TEMPC1, "u"), footnotes = altobj.footnotes}
saw_pres1_and_sub = true
end
if pres_stressed then
pres_stressed = {form = pres_stressed:gsub(TEMPC1, "u"), footnotes = altobj.footnotes}
saw_pres_stressed = true
end
table.insert(alternation_stems, {
altobj = altobj,
pres1_and_sub = pres1_and_sub,
pres_stressed = pres_stressed,
err = err
})
end
-- Now do +. We check to see which stems are used by other alternations and specify those so any footnotes are
-- properly attached.
for _, alternation_stem in ipairs(alternation_stems) do
if alternation_stem.altobj.form == "+" then
local stemobj = {form = stem, footnotes = alternation_stem.altobj.footnotes}
alternation_stem.pres1_and_sub = saw_pres1_and_sub and stemobj or nil
alternation_stem.pres_stressed = saw_pres_stressed and stemobj or nil
end
end
return alternation_stems
end
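-- Illustrative examples of apply_vowel_alternations() above: with stem "dorm" and alternation
-- "u-o", pres1_and_sub becomes "durm" while pres_stressed stays nil (so construct_stems() falls
-- back to the infinitive stem), ultimately giving durmo/dormes/dorme; with stem "sub" and "u-o",
-- pres1_and_sub stays "sub" and pres_stressed becomes "sob", giving subo/sobes/sobe.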
-- Combine the `stem` with the `ending` for the given `slot` and apply any phonetic modifications.
-- WARNING: This function is written very carefully; changes to it can easily have unintended consequences.
local function combine_stem_ending(base, slot, prefix, stem, ending, dont_include_prefix)
-- If the stem begins with an acute accent, this is a signal to move the accent onto the last vowel of the prefix.
-- Cf. míngua of minguar.
if stem:find("^" .. AC) then
stem = rsub(stem, "^" .. AC, "")
if dont_include_prefix then
error("Internal error: Can't handle acute accent at beginning of stem if dont_include_prefix is given")
end
prefix = rsub(prefix, "([aeiouyAEIOUY])([^aeiouyAEIOUY]*)$", "%1" .. AC .. "%2")
end
-- Use the full stem for checking for -gui ending and such, because 'stem' is just 'u' for [[arguir]],
-- [[delinquir]].
local full_stem = prefix .. stem
-- Include the prefix in the stem unless dont_include_prefix is given (used for the past participle stem).
if not dont_include_prefix then
stem = prefix .. stem
end
-- If the ending begins with a double asterisk, this is a signal to conditionally delete the accent on the last letter
-- of the stem. "Conditionally" means we don't do it if the last two letters would form a diphthong without the accent
-- on the second one (e.g. in [[sair]], with stem 'saí'); but as an exception, we do delete the accent in stems
-- ending in -guí, -quí (e.g. in [[conseguir]]) because in this case the ui isn't a diphthong.
if ending:find("^%*%*") then
ending = rsub(ending, "^%*%*", "")
if rfind(full_stem, "[gq]uí$") or not rfind(full_stem, V .. "[íú]$") then
stem = remove_final_accent(stem)
end
end
-- If the ending begins with an asterisk, this is a signal to delete the accent on the last letter of the stem.
-- E.g. fizé -> fizermos. Unlike for **, this removal is unconditional, so we get e.g. 'sairmos' not #'saírmos'.
if ending:find("^%*") then
ending = rsub(ending, "^%*", "")
stem = remove_final_accent(stem)
end
-- If ending begins with i, it must get an accent after an unstressed vowel (in some but not all cases) to prevent
-- the two merging into a diphthong:
-- * cair ->
-- * pres: caímos, caís;
-- * impf: all forms (caí-);
-- * pret: caí, caíste (but not caiu), caímos, caístes, caíram;
-- * plup: all forms (caír-);
-- * impf_sub: all forms (caíss-);
-- * fut_sub: caíres, caírem (but not cair, cairmos, cairdes)
-- * pp: caído (but not gerund caindo)
-- * atribuir, other verbs in -uir -> same pattern as for cair etc.
-- * roer ->
-- * pret: roí
-- * impf: all forms (roí-)
-- * pp: roído
if ending:find("^i") and full_stem:find("[aeiou]$") and not full_stem:find("[gq]u$") and ending ~= "ir" and
ending ~= "iu" and ending ~= "indo" and not ending:find("^ir[md]") then
ending = ending:gsub("^i", "í")
end
-- Spelling changes in the stem; it depends on whether the stem given is the pre-front-vowel or
-- pre-back-vowel variant, as indicated by `frontback`. We want these front-back spelling changes to happen
-- between stem and ending, not between prefix and stem; the prefix may not have the same "front/backness"
-- as the stem.
local is_front = rfind(ending, "^[eiéíê]")
if base.frontback == "front" and not is_front then
stem = stem:gsub("c$", "ç") -- conhecer -> conheço, vencer -> venço, descer -> desço
stem = stem:gsub("g$", "j") -- proteger -> protejo, fugir -> fujo
stem = stem:gsub("gu$", "g") -- distinguir -> distingo, conseguir -> consigo
stem = stem:gsub("qu$", "c") -- extorquir -> exturco
stem = stem:gsub("([gq])ü$", "%1u") -- argüir (superseded) -> arguo, delinqüir (superseded) -> delinquo
elseif base.frontback == "back" and is_front then
-- The following changes are all superseded so we don't do them:
-- averiguar -> averigüei, minguar -> mingüei; antiquar -> antiqüei, apropinquar -> apropinqüei
-- stem = stem:gsub("([gq])u$", "%1ü")
stem = stem:gsub("g$", "gu") -- chegar -> cheguei, apagar -> apaguei
stem = stem:gsub("c$", "qu") -- marcar -> marquei
stem = stem:gsub("ç$", "c") -- começar -> comecei
-- j does not go to g here; desejar -> deseje not #desege
end
return stem .. ending
end
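-- Illustrative examples of combine_stem_ending() (following the rules above): "ca" + "imos"
-- yields "caímos" (accent added to the i-initial ending after a final vowel), but "ca" + "indo"
-- stays "caindo"; in a back environment, "conhec" + "o" yields "conheço"; with a leading "**" on
-- the ending, "tivé" + "**sse" yields "tivesse" (accent dropped) while "saí" + "**res" keeps the
-- accent, yielding "saíres".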
local function add3(base, slot, stems, endings, footnotes, allow_overrides)
if skip_slot(base, slot, allow_overrides) then
return
end
local function do_combine_stem_ending(stem, ending)
return combine_stem_ending(base, slot, base.prefix, stem, ending)
end
iut.add_forms(base.forms, slot, stems, endings, do_combine_stem_ending, nil, nil, footnotes)
end
local function insert_form(base, slot, form)
if not skip_slot(base, slot) then
iut.insert_form(base.forms, slot, form)
end
end
local function insert_forms(base, slot, forms)
if not skip_slot(base, slot) then
iut.insert_forms(base.forms, slot, forms)
end
end
local function add_single_stem_tense(base, slot_pref, stems, s1, s2, s3, p1, p2, p3)
local function addit(slot, ending)
add3(base, slot_pref .. "_" .. slot, stems, ending)
end
addit("1s", s1)
addit("2s", s2)
addit("3s", s3)
addit("1p", p1)
addit("2p", p2)
addit("3p", p3)
end
local function construct_stems(base, vowel_alt)
local stems = {}
stems.pres_unstressed = base.stems.pres_unstressed or base.inf_stem
stems.pres_stressed =
-- If no_pres_stressed given, pres_stressed stem should be empty so no forms are generated.
base.no_pres_stressed and {} or
base.stems.pres_stressed or
vowel_alt.pres_stressed or
base.inf_stem
stems.pres1_and_sub =
-- If no_pres_stressed given, the entire subjunctive is missing.
base.no_pres_stressed and {} or
-- If no_pres1_and_sub given, pres1 and entire subjunctive are missing.
base.no_pres1_and_sub and {} or
base.stems.pres1_and_sub or
vowel_alt.pres1_and_sub or
nil
stems.pres1 = base.stems.pres1 or stems.pres1_and_sub or stems.pres_stressed
stems.impf = base.stems.impf or base.inf_stem
stems.full_impf = base.stems.full_impf
stems.pret_base = base.stems.pret_base or base.inf_stem
stems.pret = base.stems.pret or iut.map_forms(iut.convert_to_general_list_form(stems.pret_base), function(form)
return form .. base.conj_vowel end)
stems.pret_conj = base.stems.pret_conj or base.conj
stems.fut = base.stems.fut or base.inf_stem .. base.conj
stems.cond = base.stems.cond or stems.fut
stems.pres_sub_stressed = base.stems.pres_sub_stressed or stems.pres1
stems.pres_sub_unstressed = base.stems.pres_sub_unstressed or stems.pres1_and_sub or stems.pres_unstressed
stems.sub_conj = base.stems.sub_conj or base.conj
stems.plup = base.stems.plup or stems.pret
stems.impf_sub = base.stems.impf_sub or stems.pret
stems.fut_sub = base.stems.fut_sub or stems.pret
stems.pers_inf = base.stems.pers_inf or base.inf_stem .. base.conj_vowel
stems.pp = base.stems.pp or base.conj == "ar" and
combine_stem_ending(base, "pp_ms", base.prefix, base.inf_stem, "ado", "dont include prefix") or
-- use combine_stem_ending esp. so we get roído, caído, etc.
combine_stem_ending(base, "pp_ms", base.prefix, base.inf_stem, "ido", "dont include prefix")
stems.pp_ms = stems.pp
local function masc_to_fem(form)
if rfind(form, "o$") then
return rsub(form, "o$", "a")
else
return form
end
end
stems.pp_fs = iut.map_forms(iut.convert_to_general_list_form(stems.pp_ms), masc_to_fem)
if base.stems.short_pp then
stems.short_pp_ms = base.stems.short_pp
stems.short_pp_fs = iut.map_forms(iut.convert_to_general_list_form(stems.short_pp_ms), masc_to_fem)
end
base.this_stems = stems
end
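-- Illustrative example of the stems built above for a regular -ar verb such as "falar" (no
-- overrides, no vowel alternation): pres_unstressed = pres_stressed = pret_base = "fal",
-- pret = pers_inf = "falá", fut = cond = "falar", pp = "falado".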
local function add_present_indic(base)
local stems = base.this_stems
local function addit(slot, stems, ending)
add3(base, "pres_" .. slot, stems, ending)
end
local s2, s3, p1, p2, p3
if base.conj == "ar" then
s2, s3, p1, p2, p3 = "as", "a", "amos", "ais", "am"
elseif base.conj == "er" or base.conj == "or" then -- verbs in -por have the present overridden
s2, s3, p1, p2, p3 = "es", "e", "emos", "eis", "em"
elseif base.conj == "ir" then
s2, s3, p1, p2, p3 = "es", "e", "imos", "is", "em"
else
error("Internal error: Unrecognized conjugation " .. base.conj)
end
addit("1s", stems.pres1, "o")
addit("2s", stems.pres_stressed, s2)
addit("3s", stems.pres_stressed, s3)
addit("1p", stems.pres_unstressed, p1)
addit("2p", stems.pres_unstressed, p2)
addit("3p", stems.pres_stressed, p3)
end
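-- Illustrative example of the present-indicative endings above: "falar" gives falo, falas, fala,
-- falamos, falais, falam; an -er verb such as "comer" gives como, comes, come, comemos, comeis, comem.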
local function add_present_subj(base)
local stems = base.this_stems
local function addit(slot, stems, ending)
add3(base, "pres_sub_" .. slot, stems, ending)
end
local s1, s2, s3, p1, p2, p3
if stems.sub_conj == "ar" then
s1, s2, s3, p1, p2, p3 = "e", "es", "e", "emos", "eis", "em"
else
s1, s2, s3, p1, p2, p3 = "a", "as", "a", "amos", "ais", "am"
end
addit("1s", stems.pres_sub_stressed, s1)
addit("2s", stems.pres_sub_stressed, s2)
addit("3s", stems.pres_sub_stressed, s3)
addit("1p", stems.pres_sub_unstressed, p1)
addit("2p", stems.pres_sub_unstressed, p2)
addit("3p", stems.pres_sub_stressed, p3)
end
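-- Illustrative example of the theme-vowel swap above: "falar" (sub_conj "ar") takes the -e endings
-- (fale, fales, fale, falemos, faleis, falem), while "comer" takes the -a endings (coma, comas, ...).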
local function add_finite_non_present(base)
local stems = base.this_stems
local function add_tense(slot, stem, s1, s2, s3, p1, p2, p3)
add_single_stem_tense(base, slot, stem, s1, s2, s3, p1, p2, p3)
end
if stems.full_impf then
-- An override needs to be supplied for the impf_1p and impf_2p due to the written accent on the stem.
add_tense("impf", stems.full_impf, "a", "as", "a", {}, {}, "am")
elseif base.conj == "ar" then
add_tense("impf", stems.impf, "ava", "avas", "ava", "ávamos", "áveis", "avam")
else
add_tense("impf", stems.impf, "ia", "ias", "ia", "íamos", "íeis", "iam")
end
-- * at the beginning of the ending means to remove a final accent from the preterite stem.
if stems.pret_conj == "irreg" then
add_tense("pret", stems.pret, {}, "*ste", {}, "*mos", "*stes", "*ram")
elseif stems.pret_conj == "ar" then
add_tense("pret", stems.pret_base, "ei", "aste", "ou",
{{form = VAR_BR .. "amos", footnotes = {"[Brazilian Portuguese]"}}, {form = VAR_PT .. "ámos", footnotes = {"[European Portuguese]"}}}, "astes", "aram")
elseif stems.pret_conj == "er" then
add_tense("pret", stems.pret_base, "i", "este", "eu", "emos", "estes", "eram")
else
add_tense("pret", stems.pret_base, "i", "iste", "iu", "imos", "istes", "iram")
end
-- * at the beginning of the ending means to remove a final accent from the stem.
-- ** is similar but is "conditional" on a consonant preceding the final vowel.
add_tense("plup", stems.plup, "**ra", "**ras", "**ra", "ramos", "reis", "**ram")
add_tense("impf_sub", stems.impf_sub, "**sse", "**sses", "**sse", "ssemos", "sseis", "**ssem")
add_tense("fut_sub", stems.fut_sub, "*r", "**res", "*r", "*rmos", "*rdes", "**rem")
local mark = TEMP_MESOCLITIC_INSERTION_POINT
add_tense("fut", stems.fut, mark .. "ei", mark .. "ás", mark .. "á", mark .. "emos", mark .. "eis", mark .. "ão")
add_tense("cond", stems.cond, mark .. "ia", mark .. "ias", mark .. "ia", mark .. "íamos", mark .. "íeis", mark .. "iam")
-- Different stems for different parts of the personal infinitive to correctly handle forms of [[sair]] and [[pôr]].
add_tense("pers_inf", base.non_prefixed_verb, "", {}, "", {}, {}, {})
add_tense("pers_inf", stems.pers_inf, {}, "**res", {}, "*rmos", "*rdes", "**rem")
end
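-- Illustrative example of the '*'/'**' markers above with the irregular preterite stem "tivé"
-- (from [[ter]]): pret_2s "*ste" -> "tiveste", pret_1p "*mos" -> "tivemos"; the imperfect
-- subjunctive keeps the accent where no marker applies ("ssemos" -> "tivéssemos") but drops it
-- for "**sse" -> "tivesse"; fut_sub "*rmos" -> "tivermos".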
local function add_non_finite_forms(base)
local stems = base.this_stems
local function addit(slot, stems, ending, footnotes)
add3(base, slot, stems, ending, footnotes)
end
insert_form(base, "infinitive", {form = base.verb})
-- Also insert "infinitive + reflexive pronoun" combinations if we're handling a reflexive verb. See comment below for
-- "gerund + reflexive pronoun" combinations.
if base.refl then
for _, persnum in ipairs(person_number_list) do
insert_form(base, "infinitive_" .. persnum, {form = base.verb})
end
end
-- verbs in -por have the gerund overridden
local ger_ending = base.conj == "ar" and "ando" or base.conj == "er" and "endo" or "indo"
addit("gerund", stems.pres_unstressed, ger_ending)
-- Also insert "gerund + reflexive pronoun" combinations if we're handling a reflexive verb. We insert exactly the same
-- form as for the bare gerund; later on in add_reflexive_or_fixed_clitic_to_forms(), we add the appropriate clitic
-- pronouns. It's important not to do this for non-reflexive verbs, because in that case, the clitic pronouns won't be
-- added, and {{pt-verb form of}} will wrongly consider all these combinations as possible inflections of the bare
-- gerund. Thanks to [[User:JeffDoozan]] for this bug fix.
if base.refl then
for _, persnum in ipairs(person_number_list) do
addit("gerund_" .. persnum, stems.pres_unstressed, ger_ending)
end
end
-- Skip the long/short past participle footnotes if called from {{pt-verb}} so they don't show in the headword.
local long_pp_footnotes =
stems.short_pp_ms and base.alternant_multiword_spec.source_template ~= "pt-verb" and {long_pp_footnote} or nil
addit("pp_ms", stems.pp_ms, "", long_pp_footnotes)
if not base.pp_inv then
addit("pp_fs", stems.pp_fs, "", long_pp_footnotes)
addit("pp_mp", stems.pp_ms, "s", long_pp_footnotes)
addit("pp_fp", stems.pp_fs, "s", long_pp_footnotes)
end
if stems.short_pp_ms then
local short_pp_footnotes =
stems.short_pp_ms and base.alternant_multiword_spec.source_template ~= "pt-verb" and {short_pp_footnote} or nil
addit("short_pp_ms", stems.short_pp_ms, "", short_pp_footnotes)
if not base.pp_inv then
addit("short_pp_fs", stems.short_pp_fs, "", short_pp_footnotes)
addit("short_pp_mp", stems.short_pp_ms, "s", short_pp_footnotes)
addit("short_pp_fp", stems.short_pp_fs, "s", short_pp_footnotes)
end
end
end
local function copy_forms_to_imperatives(base)
-- Copy pres3s to imperative since they are almost always the same.
insert_forms(base, "imp_2s", iut.map_forms(base.forms.pres_3s, function(form) return form end))
if not skip_slot(base, "imp_2p") then
-- Copy pres2p to imperative 2p minus -s since they are almost always the same.
-- But not if there's an override, to avoid possibly throwing an error.
insert_forms(base, "imp_2p", iut.map_forms(base.forms.pres_2p, function(form)
if not form:find("s$") then
error("Can't derive second-person plural imperative from second-person plural present indicative " ..
"because form '" .. form .. "' doesn't end in -s")
end
return rsub(form, "s$", "")
end))
end
-- Copy subjunctives to imperatives, unless there's an override for the given slot (as with the imp_1p of [[ir]]).
for _, persnum in ipairs({"3s", "1p", "3p"}) do
local from = "pres_sub_" .. persnum
local to = "imp_" .. persnum
insert_forms(base, to, iut.map_forms(base.forms[from], function(form) return form end))
end
end
local function process_slot_overrides(base, filter_slot, reflexive_only)
local overrides = reflexive_only and base.basic_reflexive_only_overrides or base.basic_overrides
for slot, forms in pairs(overrides) do
if not filter_slot or filter_slot(slot) then
add3(base, slot, forms, "", nil, "allow overrides")
end
end
end
-- Prefix `form` with `clitic`, adding fixed text `between` between them. Add links as appropriate unless the user
-- requested no links. Check whether form already has brackets (as will be the case if the form has a fixed clitic).
local function prefix_clitic_to_form(base, clitic, between, form)
if base.alternant_multiword_spec.args.noautolinkverb then
return clitic .. between .. form
else
local clitic_pref = "[[" .. clitic .. "]]" .. between
if form:find("%[%[") then
return clitic_pref .. form
else
return clitic_pref .. "[[" .. form .. "]]"
end
end
end
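-- Illustrative example: prefix_clitic_to_form(base, "se", " ", "esbalde") returns
-- "[[se]] [[esbalde]]" when autolinking is on, or "se esbalde" with noautolinkverb.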
-- Add the appropriate clitic pronouns in `clitics` to the forms in `base_slot`. `store_cliticized_form` is a function
-- of three arguments (clitic, formobj, cliticized_form) and should store the cliticized form for the specified clitic
-- and form object.
local function suffix_clitic_to_forms(base, base_slot, clitics, store_cliticized_form)
if not base.forms[base_slot] then
-- This can happen, e.g. in only3s/only3sp/only3p verbs.
return
end
local autolink = not base.alternant_multiword_spec.args.noautolinkverb
for _, formobj in ipairs(base.forms[base_slot]) do
for _, clitic in ipairs(clitics) do
local cliticized_form
if formobj.form:find(TEMP_MESOCLITIC_INSERTION_POINT) then
-- mesoclisis in future and conditional
local infinitive, suffix = rmatch(formobj.form, "^(.*)" .. TEMP_MESOCLITIC_INSERTION_POINT .. "(.*)$")
if not infinitive then
error("Internal error: Can't find mesoclitic insertion point in slot '" .. base_slot .. "', form '" ..
formobj.form .. "'")
end
local full_form = infinitive .. suffix
if autolink and not infinitive:find("%[%[") then
infinitive = "[[" .. infinitive .. "]]"
end
cliticized_form =
autolink and infinitive .. "-[[" .. clitic .. "]]-[[" .. full_form .. "|" .. suffix .. "]]" or
infinitive .. "-" .. clitic .. "-" .. suffix
else
local clitic_suffix = autolink and "-[[" .. clitic .. "]]" or "-" .. clitic
local form_needs_link = autolink and not formobj.form:find("%[%[")
if base_slot:find("1p$") then
-- Final -s disappears: esbaldávamos + nos -> esbaldávamo-nos, etc.
cliticized_form = formobj.form:gsub("s$", "")
if form_needs_link then
cliticized_form = "[[" .. formobj.form .. "|" .. cliticized_form .. "]]"
end
else
cliticized_form = formobj.form
if form_needs_link then
cliticized_form = "[[" .. cliticized_form .. "]]"
end
end
cliticized_form = cliticized_form .. clitic_suffix
end
store_cliticized_form(clitic, formobj, cliticized_form)
end
end
end
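-- Illustrative examples (with autolinking): a 1p form such as "esbaldávamos" plus "nos" becomes
-- "[[esbaldávamos|esbaldávamo]]-[[nos]]"; a future form carrying the mesoclitic insertion point,
-- e.g. "esbaldar" + point + "á" with clitic "se", becomes "[[esbaldar]]-[[se]]-[[esbaldará|á]]".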
-- Add a reflexive pronoun or fixed clitic (FIXME: not working), as appropriate to the base forms that were generated.
-- `do_joined` means to do only the forms where the pronoun is joined to the end of the form; otherwise, do only the
-- forms where it is not joined and precedes the form.
local function add_reflexive_or_fixed_clitic_to_forms(base, do_reflexive, do_joined)
for _, slotaccel in ipairs(base.alternant_multiword_spec.verb_slots_basic) do
local slot, accel = unpack(slotaccel)
local clitic
if not do_reflexive then
clitic = base.clitic
elseif slot:find("[123]") then
local persnum = slot:match("^.*_(.-)$")
clitic = person_number_to_reflexive_pronoun[persnum]
else
clitic = "se"
end
if base.forms[slot] then
if do_reflexive and slot:find("^pp_") or slot == "infinitive_linked" then
-- do nothing with reflexive past participles or with infinitive linked (handled at the end)
elseif slot:find("^neg_imp_") then
error("Internal error: Should not have forms set for negative imperative at this stage")
else
local slot_has_suffixed_clitic = not slot:find("_sub")
-- Maybe generate non-reflexive parts and separated syntactic variants for use in {{pt-verb form of}}.
-- See comment in add_slots() above `need_special_verb_form_of_slots`. Check for do_joined so we only
-- run this code once.
if do_reflexive and do_joined and base.alternant_multiword_spec.source_template == "pt-verb form of" and
-- Skip personal variants of infinitives and gerunds so we don't think [[esbaldando]] is a
-- non-reflexive equivalent of [[esbaldando-me]].
not slot:find("infinitive_") and not slot:find("gerund_") then
-- Clone the forms because we will be destructively modifying them just below, adding the reflexive
-- pronoun.
insert_forms(base, slot .. "_non_reflexive", mw.clone(base.forms[slot]))
if slot_has_suffixed_clitic then
insert_forms(base, slot .. "_variant", iut.map_forms(base.forms[slot], function(form)
return prefix_clitic_to_form(base, clitic, " ... ", form)
end))
end
end
if slot_has_suffixed_clitic then
if do_joined then
suffix_clitic_to_forms(base, slot, {clitic},
function(clitic, formobj, cliticized_form)
formobj.form = cliticized_form
end
)
end
elseif not do_joined then
-- Add clitic as separate word before all other forms.
for _, form in ipairs(base.forms[slot]) do
form.form = prefix_clitic_to_form(base, clitic, " ", form.form)
end
end
end
end
end
end
local function handle_infinitive_linked(base)
-- Compute linked versions of potential lemma slots, for use in {{pt-verb}}.
-- We substitute the original lemma (before removing links) for forms that
-- are the same as the lemma, if the original lemma has links.
for _, slot in ipairs({"infinitive"}) do
insert_forms(base, slot .. "_linked", iut.map_forms(base.forms[slot], function(form)
if form == base.lemma and rfind(base.linked_lemma, "%[%[") then
return base.linked_lemma
else
return form
end
end))
end
end
local function generate_negative_imperatives(base)
-- Copy subjunctives to negative imperatives, preceded by "não".
for _, persnum in ipairs(neg_imp_person_number_list) do
local from = "pres_sub_" .. persnum
local to = "neg_imp_" .. persnum
insert_forms(base, to, iut.map_forms(base.forms[from], function(form)
if base.alternant_multiword_spec.args.noautolinkverb then
return "não " .. form
elseif form:find("%[%[") then
-- already linked, e.g. when reflexive
return "[[não]] " .. form
else
return "[[não]] [[" .. form .. "]]"
end
end))
end
end
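-- Illustrative example: from a present subjunctive form such as pres_sub_2s "esbaldes" this
-- yields the negative imperative "[[não]] [[esbaldes]]" (or "não esbaldes" with noautolinkverb).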
-- Process specs given by the user using 'addnote[SLOTSPEC][FOOTNOTE][FOOTNOTE][...]'.
local function process_addnote_specs(base)
for _, spec in ipairs(base.addnote_specs) do
for _, slot_spec in ipairs(spec.slot_specs) do
slot_spec = "^" .. slot_spec .. "$"
for slot, forms in pairs(base.forms) do
if rfind(slot, slot_spec) then
-- To save on memory, side-effect the existing forms.
for _, form in ipairs(forms) do
form.footnotes = iut.combine_footnotes(form.footnotes, spec.footnotes)
end
end
end
end
end
end
local function add_missing_links_to_forms(base)
-- Any forms without links should get them now. Redundant ones will be stripped later.
for slot, forms in pairs(base.forms) do
for _, form in ipairs(forms) do
if not form.form:find("%[%[") then
form.form = "[[" .. form.form .. "]]"
end
end
end
end
-- Remove special characters added to future and conditional forms to indicate mesoclitic insertion points.
local function remove_mesoclitic_insertion_points(base)
for slot, forms in pairs(base.forms) do
if slot:find("^fut_") or slot:find("^cond_") then
for _, form in ipairs(forms) do
form.form = form.form:gsub(TEMP_MESOCLITIC_INSERTION_POINT, "")
end
end
end
end
-- If called from {{pt-verb}}, remove superseded forms; otherwise add a footnote indicating they are superseded.
local function process_superseded_forms(base)
if base.alternant_multiword_spec.source_template == "pt-verb" then
for slot, forms in pairs(base.forms) do
-- As an optimization, check if there are any superseded forms and don't do anything if not.
local saw_superseded = false
for _, form in ipairs(forms) do
if form.form:find(VAR_SUPERSEDED) then
saw_superseded = true
break
end
end
if saw_superseded then
base.forms[slot] = iut.flatmap_forms(base.forms[slot], function(form)
if form:find(VAR_SUPERSEDED) then
return {}
else
return {form}
end
end)
end
end
else
for slot, forms in pairs(base.forms) do
for _, form in ipairs(forms) do
if form.form:find(VAR_SUPERSEDED) then
form.footnotes = iut.combine_footnotes(form.footnotes, {"[superseded]"})
end
end
end
end
end
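-- Illustrative example: a superseded spelling such as "lêem" (marked with VAR_SUPERSEDED in the
-- built-in data for [[ler]]) is dropped entirely when called from {{pt-verb}}, but kept with a
-- "[superseded]" footnote elsewhere (e.g. in the conjugation table).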
local function conjugate_verb(base)
for _, vowel_alt in ipairs(base.vowel_alt_stems) do
construct_stems(base, vowel_alt)
add_present_indic(base)
add_present_subj(base)
end
add_finite_non_present(base)
add_non_finite_forms(base)
-- do non-reflexive non-imperative slot overrides
process_slot_overrides(base, function(slot)
return not slot:find("^imp_") and not slot:find("^neg_imp_")
end)
-- This should happen after process_slot_overrides() in case a derived slot is based on an override
-- (as with the imp_3s of [[dar]], [[estar]]).
copy_forms_to_imperatives(base)
-- do non-reflexive positive imperative slot overrides
process_slot_overrides(base, function(slot)
return slot:find("^imp_")
end)
-- We need to add joined reflexives, then joined and non-joined clitics, then non-joined reflexives, so we get
-- [[esbalda-te]] but [[não]] [[te]] [[esbalde]].
if base.refl then
-- This should happen after remove_monosyllabic_accents() so the * marking the preservation of monosyllabic
-- accents doesn't end up in the middle of a word.
add_reflexive_or_fixed_clitic_to_forms(base, "do reflexive", "do joined")
process_slot_overrides(base, nil, "do reflexive") -- do reflexive-only slot overrides
add_reflexive_or_fixed_clitic_to_forms(base, "do reflexive", false)
end
-- This should happen after add_reflexive_or_fixed_clitic_to_forms() so negative imperatives get the reflexive pronoun
-- and clitic in them.
generate_negative_imperatives(base)
-- do non-reflexive negative imperative slot overrides
-- FIXME: What about reflexive negative imperatives?
process_slot_overrides(base, function(slot)
return slot:find("^neg_imp_")
end)
-- This should happen before add_missing_links_to_forms() so that the comparison `form == base.lemma`
-- in handle_infinitive_linked() works correctly and compares unlinked forms to unlinked forms.
handle_infinitive_linked(base)
process_addnote_specs(base)
if not base.alternant_multiword_spec.args.noautolinkverb then
add_missing_links_to_forms(base)
end
remove_mesoclitic_insertion_points(base)
process_superseded_forms(base)
end
local function parse_indicator_spec(angle_bracket_spec)
-- Store the original angle bracket spec so we can reconstruct the overall conj spec with the lemma(s) in them.
local base = {
angle_bracket_spec = angle_bracket_spec,
user_basic_overrides = {},
user_stems = {},
addnote_specs = {},
}
local function parse_err(msg)
error(msg .. ": " .. angle_bracket_spec)
end
local function fetch_footnotes(separated_group)
local footnotes
for j = 2, #separated_group - 1, 2 do
if separated_group[j + 1] ~= "" then
parse_err("Extraneous text after bracketed footnotes: '" .. table.concat(separated_group) .. "'")
end
if not footnotes then
footnotes = {}
end
table.insert(footnotes, separated_group[j])
end
return footnotes
end
local inside = angle_bracket_spec:match("^<(.*)>$")
assert(inside)
if inside == "" then
return base
end
local segments = iut.parse_balanced_segment_run(inside, "[", "]")
local dot_separated_groups = iut.split_alternating_runs(segments, "%.")
for i, dot_separated_group in ipairs(dot_separated_groups) do
local first_element = dot_separated_group[1]
if first_element == "addnote" then
local spec_and_footnotes = fetch_footnotes(dot_separated_group)
if #spec_and_footnotes < 2 then
parse_err("Spec with 'addnote' should be of the form 'addnote[SLOTSPEC][FOOTNOTE][FOOTNOTE][...]'")
end
local slot_spec = table.remove(spec_and_footnotes, 1)
local slot_spec_inside = rmatch(slot_spec, "^%[(.*)%]$")
if not slot_spec_inside then
parse_err("Internal error: slot_spec " .. slot_spec .. " should be surrounded with brackets")
end
local slot_specs = rsplit(slot_spec_inside, ",")
-- FIXME: Here, [[Module:it-verb]] called strip_spaces(). Generally we don't do this. Should we?
table.insert(base.addnote_specs, {slot_specs = slot_specs, footnotes = spec_and_footnotes})
elseif indicator_flags[first_element] then
if #dot_separated_group > 1 then
parse_err("No footnotes allowed with '" .. first_element .. "' spec")
end
if base[first_element] then
parse_err("Spec '" .. first_element .. "' specified twice")
end
base[first_element] = true
elseif rfind(first_element, ":") then
local colon_separated_groups = iut.split_alternating_runs(dot_separated_group, "%s*:%s*")
local first_element = colon_separated_groups[1][1]
if #colon_separated_groups[1] > 1 then
parse_err("Can't attach footnotes directly to '" .. first_element .. "' spec; attach them to the " ..
"colon-separated values following the initial colon")
end
if overridable_stems[first_element] then
if base.user_stems[first_element] then
parse_err("Overridable stem '" .. first_element .. "' specified twice")
end
table.remove(colon_separated_groups, 1)
base.user_stems[first_element] = overridable_stems[first_element](colon_separated_groups,
{prefix = first_element, base = base, parse_err = parse_err, fetch_footnotes = fetch_footnotes})
else -- assume a basic override; we validate further later when the possible slots are available
if base.user_basic_overrides[first_element] then
parse_err("Basic override '" .. first_element .. "' specified twice")
end
table.remove(colon_separated_groups, 1)
base.user_basic_overrides[first_element] = allow_multiple_values(colon_separated_groups,
{prefix = first_element, base = base, parse_err = parse_err, fetch_footnotes = fetch_footnotes})
end
else
local comma_separated_groups = iut.split_alternating_runs(dot_separated_group, "%s*,%s*")
for j = 1, #comma_separated_groups do
local alt = comma_separated_groups[j][1]
if not vowel_alternants[alt] then
if #comma_separated_groups == 1 then
parse_err("Unrecognized spec or vowel alternant '" .. alt .. "'")
else
parse_err("Unrecognized vowel alternant '" .. alt .. "'")
end
end
if base.vowel_alt then
for _, existing_alt in ipairs(base.vowel_alt) do
if existing_alt.form == alt then
parse_err("Vowel alternant '" .. alt .. "' specified twice")
end
end
else
base.vowel_alt = {}
end
table.insert(base.vowel_alt, {form = alt, footnotes = fetch_footnotes(comma_separated_groups[j])})
end
end
end
return base
end
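-- Illustrative examples of angle-bracket specs handled above (hypothetical inputs): "<u-o>"
-- yields base.vowel_alt = {{form = "u-o"}}; "<no_pres_stressed>" sets the corresponding indicator
-- flag; "<addnote[pres_1s][rare]>" attaches the footnote to the pres_1s forms; anything of the
-- form "name:value" is treated as an overridable stem or else as a basic slot override.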
-- Normalize all lemmas, substituting the pagename for blank lemmas and adding links to multiword lemmas.
local function normalize_all_lemmas(alternant_multiword_spec, head)
-- (1) Add links to all before and after text. Remember the original text so we can reconstruct the verb spec later.
if not alternant_multiword_spec.args.noautolinktext then
iut.add_links_to_before_and_after_text(alternant_multiword_spec, "remember original")
end
-- (2) Remove any links from the lemma, but remember the original form
-- so we can use it below in the 'lemma_linked' form.
iut.map_word_specs(alternant_multiword_spec, function(base)
if base.lemma == "" then
base.lemma = head
end
base.user_specified_lemma = base.lemma
base.lemma = m_links.remove_links(base.lemma)
local refl_verb = base.lemma
local verb, refl = rmatch(refl_verb, "^(.-)%-(se)$")
if not verb then
verb, refl = refl_verb, nil
end
base.user_specified_verb = verb
base.refl = refl
base.verb = base.user_specified_verb
local linked_lemma
if alternant_multiword_spec.args.noautolinkverb or base.user_specified_lemma:find("%[%[") then
linked_lemma = base.user_specified_lemma
elseif base.refl then
-- Reconstruct the linked lemma with separate links around base verb and reflexive pronoun.
linked_lemma = base.user_specified_verb == base.verb and "[[" .. base.user_specified_verb .. "]]" or
"[[" .. base.verb .. "|" .. base.user_specified_verb .. "]]"
linked_lemma = linked_lemma .. (refl and "-[[" .. refl .. "]]" or "")
else
-- Add links to the lemma so the user doesn't specifically need to, since we preserve
-- links in multiword lemmas and include links in non-lemma forms rather than allowing
-- the entire form to be a link.
linked_lemma = iut.add_links(base.user_specified_lemma)
end
base.linked_lemma = linked_lemma
end)
end
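-- Illustrative example: a reflexive lemma "esbaldar-se" is split into base.verb = "esbaldar" and
-- base.refl = "se", with the linked lemma rebuilt as "[[esbaldar]]-[[se]]" unless autolinking is
-- disabled or the user already supplied links.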
local function detect_indicator_spec(base)
if (base.only3s and 1 or 0) + (base.only3sp and 1 or 0) + (base.only3p and 1 or 0) > 1 then
error("Only one of 'only3s', 'only3sp' and 'only3p' can be specified")
end
base.forms = {}
base.stems = {}
base.basic_overrides = {}
base.basic_reflexive_only_overrides = {}
if not base.no_built_in then
for _, built_in_conj in ipairs(built_in_conjugations) do
if type(built_in_conj.match) == "function" then
base.prefix, base.non_prefixed_verb = built_in_conj.match(base.verb)
elseif built_in_conj.match:find("^%^") and rsub(built_in_conj.match, "^%^", "") == base.verb then
-- begins with ^, for exact match, and matches
base.prefix, base.non_prefixed_verb = "", base.verb
else
base.prefix, base.non_prefixed_verb = rmatch(base.verb, "^(.*)(" .. built_in_conj.match .. ")$")
end
if base.prefix then
-- we found a built-in verb
for stem, forms in pairs(built_in_conj.forms) do
if type(forms) == "function" then
forms = forms(base, base.prefix)
end
if stem:find("^refl_") then
stem = stem:gsub("^refl_", "")
if not base.alternant_multiword_spec.verb_slots_basic_map[stem] then
error("Internal error: setting for 'refl_" .. stem .. "' does not refer to a basic verb slot")
end
base.basic_reflexive_only_overrides[stem] = forms
elseif base.alternant_multiword_spec.verb_slots_basic_map[stem] then
-- an individual form override of a basic form
base.basic_overrides[stem] = forms
else
base.stems[stem] = forms
end
end
break
end
end
end
-- Override built-in-verb stems and overrides with user-specified ones.
for stem, values in pairs(base.user_stems) do
base.stems[stem] = values
end
for override, values in pairs(base.user_basic_overrides) do
if not base.alternant_multiword_spec.verb_slots_basic_map[override] then
error("Unrecognized override '" .. override .. "': " .. base.angle_bracket_spec)
end
base.basic_overrides[override] = values
end
base.prefix = base.prefix or ""
base.non_prefixed_verb = base.non_prefixed_verb or base.verb
local inf_stem, suffix = rmatch(base.non_prefixed_verb, "^(.*)([aeioô]r)$")
if not inf_stem then
error("Unrecognized infinitive: " .. base.verb)
end
base.inf_stem = inf_stem
suffix = suffix == "ôr" and "or" or suffix
base.conj = suffix
base.conj_vowel = suffix == "ar" and "á" or suffix == "ir" and "í" or "ê"
base.frontback = suffix == "ar" and "back" or "front"
if base.stems.vowel_alt then -- built-in verb with specified vowel alternation
if base.vowel_alt then
error(base.verb .. " is a recognized built-in verb, and should not have vowel alternations specified with it")
end
base.vowel_alt = iut.convert_to_general_list_form(base.stems.vowel_alt)
end
-- Propagate built-in-verb indicator flags to `base` and combine with user-specified flags.
for indicator_flag, _ in pairs(indicator_flags) do
base[indicator_flag] = base[indicator_flag] or base.stems[indicator_flag]
end
-- Convert vowel alternation indicators into stems.
local vowel_alt = base.vowel_alt or {{form = "+"}}
base.vowel_alt_stems = apply_vowel_alternations(base.inf_stem, vowel_alt)
for _, vowel_alt_stems in ipairs(base.vowel_alt_stems) do
if vowel_alt_stems.err then
error("To use '" .. vowel_alt_stems.altobj.form .. "', present stem '" .. base.prefix .. base.inf_stem .. "' " ..
vowel_alt_stems.err)
end
end
end
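-- Illustrative example of the built-in matching above: for the verb "descobrir", the "cobrir"
-- entry matches with prefix "des", so the verb inherits vowel_alt = "u-o" and the stem
-- pp = "coberto", ultimately giving the past participle "descoberto"; the prefix is re-attached
-- when stems and endings are combined.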
local function detect_all_indicator_specs(alternant_multiword_spec)
-- Propagate some settings up; some are used internally, others by [[Module:pt-headword]].
iut.map_word_specs(alternant_multiword_spec, function(base)
-- Internal indicator flags. Do these before calling detect_indicator_spec() because add_slots() uses them.
for _, prop in ipairs { "refl", "clitic" } do
if base[prop] then
alternant_multiword_spec[prop] = true
end
end
base.alternant_multiword_spec = alternant_multiword_spec
end)
add_slots(alternant_multiword_spec)
alternant_multiword_spec.vowel_alt = {}
iut.map_word_specs(alternant_multiword_spec, function(base)
detect_indicator_spec(base)
-- User-specified indicator flags. Do these after calling detect_indicator_spec() because the latter may set these
-- indicators for built-in verbs.
for prop, _ in pairs(indicator_flags) do
if base[prop] then
alternant_multiword_spec[prop] = true
end
end
-- Vowel alternants. Do these after calling detect_indicator_spec() because the latter sets base.vowel_alt for
-- built-in verbs.
if base.vowel_alt then
for _, altobj in ipairs(base.vowel_alt) do
m_table.insertIfNot(alternant_multiword_spec.vowel_alt, altobj.form)
end
end
end)
end
local function add_categories_and_annotation(alternant_multiword_spec, base, multiword_lemma)
local function insert_ann(anntype, value)
m_table.insertIfNot(alternant_multiword_spec.annotation[anntype], value)
end
local function insert_cat(cat, also_when_multiword)
-- Don't place multiword terms in categories like 'Portuguese verbs ending in -ar' to avoid spamming the
-- categories with such terms.
if also_when_multiword or not multiword_lemma then
m_table.insertIfNot(alternant_multiword_spec.categories, "Portuguese " .. cat)
end
end
if check_for_red_links and alternant_multiword_spec.source_template == "pt-conj" and multiword_lemma then
for _, slot_and_accel in ipairs(alternant_multiword_spec.all_verb_slots) do
local slot = slot_and_accel[1]
local forms = base.forms[slot]
local must_break = false
if forms then
for _, form in ipairs(forms) do
if not form.form:find("%[%[") then
local title = mw.title.new(form.form)
if title and not title.exists then
insert_cat("verbs with red links in their inflection tables")
must_break = true
break
end
end
end
end
if must_break then
break
end
end
end
insert_cat("verbs ending in -" .. base.conj)
if base.irreg then
insert_ann("irreg", "irregular")
insert_cat("irregular verbs")
else
insert_ann("irreg", "regular")
end
if base.only3s then
insert_ann("defective", "impersonal")
insert_cat("impersonal verbs")
elseif base.only3sp then
insert_ann("defective", "third-person only")
insert_cat("third-person-only verbs")
elseif base.only3p then
insert_ann("defective", "third-person plural only")
insert_cat("third-person-plural-only verbs")
elseif base.no_pres_stressed or base.no_pres1_and_sub then
insert_ann("defective", "defective")
insert_cat("defective verbs")
else
insert_ann("defective", "regular")
end
if base.stems.short_pp then
insert_ann("short_pp", "irregular short past participle")
insert_cat("verbs with irregular short past participle")
else
insert_ann("short_pp", "regular")
end
if base.clitic then
insert_cat("verbs with lexical clitics")
end
if base.refl then
insert_cat("reflexive verbs")
end
if base.e_ei_cat then
insert_ann("vowel_alt", "''e'' becomes ''ei'' when stressed")
insert_cat("verbs with e becoming ei when stressed")
elseif not base.vowel_alt then
insert_ann("vowel_alt", "non-alternating")
else
for _, alt in ipairs(base.vowel_alt) do
if alt.form == "+" then
insert_ann("vowel_alt", "non-alternating")
else
insert_ann("vowel_alt", vowel_alternant_to_desc[alt.form])
insert_cat("verbs with " .. vowel_alternant_to_cat[alt.form])
end
end
end
local cons_alt = base.stems.cons_alt
if cons_alt == nil then
if base.conj == "ar" then
if base.inf_stem:find("ç$") then
cons_alt = "c-ç"
elseif base.inf_stem:find("c$") then
cons_alt = "c-qu"
elseif base.inf_stem:find("g$") then
cons_alt = "g-gu"
end
else
if base.no_pres_stressed or base.no_pres1_and_sub then
cons_alt = nil -- no e.g. c-ç alternation in this case
elseif base.inf_stem:find("c$") then
cons_alt = "c-ç"
elseif base.inf_stem:find("qu$") then
cons_alt = "c-qu"
elseif base.inf_stem:find("g$") then
cons_alt = "g-j"
elseif base.inf_stem:find("gu$") then
cons_alt = "g-gu"
end
end
end
if cons_alt then
local desc = cons_alt .. " alternation"
insert_ann("cons_alt", desc)
insert_cat("verbs with " .. desc)
else
insert_ann("cons_alt", "non-alternating")
end
end
-- Compute the categories to add the verb to, as well as the annotation to display in the
-- conjugation title bar. We combine the code to do these functions as both categories and
-- title bar contain similar information.
local function compute_categories_and_annotation(alternant_multiword_spec)
alternant_multiword_spec.categories = {}
local ann = {}
alternant_multiword_spec.annotation = ann
ann.irreg = {}
ann.short_pp = {}
ann.defective = {}
ann.vowel_alt = {}
ann.cons_alt = {}
local multiword_lemma = false
for _, form in ipairs(alternant_multiword_spec.forms.infinitive) do
if form.form:find(" ") then
multiword_lemma = true
break
end
end
iut.map_word_specs(alternant_multiword_spec, function(base)
add_categories_and_annotation(alternant_multiword_spec, base, multiword_lemma)
end)
local ann_parts = {}
local irreg = table.concat(ann.irreg, " or ")
if irreg ~= "" and irreg ~= "regular" then
table.insert(ann_parts, irreg)
end
local short_pp = table.concat(ann.short_pp, " or ")
if short_pp ~= "" and short_pp ~= "regular" then
table.insert(ann_parts, short_pp)
end
local defective = table.concat(ann.defective, " or ")
if defective ~= "" and defective ~= "regular" then
table.insert(ann_parts, defective)
end
local vowel_alt = table.concat(ann.vowel_alt, " or ")
if vowel_alt ~= "" and vowel_alt ~= "non-alternating" then
table.insert(ann_parts, vowel_alt)
end
local cons_alt = table.concat(ann.cons_alt, " or ")
if cons_alt ~= "" and cons_alt ~= "non-alternating" then
table.insert(ann_parts, cons_alt)
end
alternant_multiword_spec.annotation = table.concat(ann_parts, "; ")
end
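-- As a rough example of the output of the above: for a verb declared as 'sentir<i-e>', the
-- title-bar annotation assembled here would come out as something like "''i-e'' alternation in
-- present singular", and the categories would include "Portuguese verbs with i-e alternation in
-- present singular" (the "regular"/"non-alternating" parts are suppressed).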
local function show_forms(alternant_multiword_spec)
local lemmas = alternant_multiword_spec.forms.infinitive
alternant_multiword_spec.lemmas = lemmas -- save for later use in make_table()
if alternant_multiword_spec.forms.short_pp_ms then
alternant_multiword_spec.has_short_pp = true
end
local reconstructed_verb_spec = iut.reconstruct_original_spec(alternant_multiword_spec)
local function transform_accel_obj(slot, formobj, accel_obj)
-- No accelerators for negative imperatives, which are always multiword and derived directly from the
-- present subjunctive.
if slot:find("^neg_imp") then
return nil
end
if accel_obj then
if slot:find("^pp_") then
accel_obj.form = slot
elseif slot == "gerund" then
accel_obj.form = "gerund-" .. reconstructed_verb_spec
else
accel_obj.form = "verb-form-" .. reconstructed_verb_spec
end
end
return accel_obj
end
-- Italicize superseded forms.
local function generate_link(data)
local formval_for_link = data.form.formval_for_link
if formval_for_link:find(VAR_SUPERSEDED) then
formval_for_link = formval_for_link:gsub(VAR_SUPERSEDED, "")
return m_links.full_link({lang = lang, term = formval_for_link, tr = "-", accel = data.form.accel_obj},
"term") .. iut.get_footnote_text(data.form.footnotes, data.footnote_obj)
end
end
local props = {
lang = lang,
lemmas = lemmas,
transform_accel_obj = transform_accel_obj,
canonicalize = function(form) return export.remove_variant_codes(form, "keep superseded") end,
generate_link = generate_link,
slot_list = alternant_multiword_spec.verb_slots_basic,
}
iut.show_forms(alternant_multiword_spec.forms, props)
alternant_multiword_spec.footnote_basic = alternant_multiword_spec.forms.footnote
end
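-- For instance (an illustrative sketch): with a reconstructed spec of "sentir<i-e>",
-- transform_accel_obj() above sets accel_obj.form to "verb-form-sentir<i-e>" for most slots,
-- "gerund-sentir<i-e>" for the gerund, the bare slot name for past-participle (pp_*) slots, and
-- returns nil (no accelerator) for the neg_imp_* slots.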
local notes_template = [=[
<div style="width:100%;text-align:left;background:#d9ebff">
<div style="display:inline-block;text-align:left;padding-left:1em;padding-right:1em">
{footnote}
</div></div>]=]
local basic_table = [=[
{description}<div class="NavFrame">
<div class="NavHead" align=center> Conjugation of {title} (See [[Appendix:Portuguese verbs]])</div>
<div class="NavContent" align="left">
{\op}| class="inflection-table" style="background:#F6F6F6; text-align: left; border: 1px solid #999999;" cellpadding="3" cellspacing="0"
|-
! style="border: 1px solid #999999; background:#B0B0B0" rowspan="2" |
! style="border: 1px solid #999999; background:#D0D0D0" colspan="3" | Singular
! style="border: 1px solid #999999; background:#D0D0D0" colspan="3" | Plural
|-
! style="border: 1px solid #999999; background:#D0D0D0; width:12.5%" | First-person<br />(<<eu>>)
! style="border: 1px solid #999999; background:#D0D0D0; width:12.5%" | Second-person<br />(<<tu>>)
! style="border: 1px solid #999999; background:#D0D0D0; width:12.5%" | Third-person<br />(<<ele>> / <<ela>> / <<você>>)
! style="border: 1px solid #999999; background:#D0D0D0; width:12.5%" | First-person<br />(<<nós>>)
! style="border: 1px solid #999999; background:#D0D0D0; width:12.5%" | Second-person<br />(<<vós>>)
! style="border: 1px solid #999999; background:#D0D0D0; width:12.5%" | Third-person<br />(<<eles>> / <<elas>> / <<vocês>>)
|-
! style="border: 1px solid #999999; background:#c498ff" colspan="7" | ''<span title="infinitivo">Infinitive</span>''
|-
! style="border: 1px solid #999999; background:#a478df" | '''<span title="infinitivo impessoal">Impersonal</span>'''
| style="border: 1px solid #999999; vertical-align: top;" colspan="6" | {infinitive}
|-
! style="border: 1px solid #999999; background:#a478df" | '''<span title="infinitivo pessoal">Personal</span>'''
| style="border: 1px solid #999999; vertical-align: top;" | {pers_inf_1s}
| style="border: 1px solid #999999; vertical-align: top;" | {pers_inf_2s}
| style="border: 1px solid #999999; vertical-align: top;" | {pers_inf_3s}
| style="border: 1px solid #999999; vertical-align: top;" | {pers_inf_1p}
| style="border: 1px solid #999999; vertical-align: top;" | {pers_inf_2p}
| style="border: 1px solid #999999; vertical-align: top;" | {pers_inf_3p}
|-
! style="border: 1px solid #999999; background:#98ffc4" colspan="7" | ''<span title="gerúndio">Gerund</span>''
|-
| style="border: 1px solid #999999; background:#78dfa4" |
| style="border: 1px solid #999999; vertical-align: top;" colspan="6" | {gerund}
|-{pp_clause}
! style="border: 1px solid #999999; background:#d0dff4" colspan="7" | ''<span title="indicativo">Indicative</span>''
|-
! style="border: 1px solid #999999; background:#b0bfd4" | <span title="presente">Present</span>
| style="border: 1px solid #999999; vertical-align: top;" | {pres_1s}
| style="border: 1px solid #999999; vertical-align: top;" | {pres_2s}
| style="border: 1px solid #999999; vertical-align: top;" | {pres_3s}
| style="border: 1px solid #999999; vertical-align: top;" | {pres_1p}
| style="border: 1px solid #999999; vertical-align: top;" | {pres_2p}
| style="border: 1px solid #999999; vertical-align: top;" | {pres_3p}
|-
! style="border: 1px solid #999999; background:#b0bfd4" | <span title="pretérito imperfeito">Imperfect</span>
| style="border: 1px solid #999999; vertical-align: top;" | {impf_1s}
| style="border: 1px solid #999999; vertical-align: top;" | {impf_2s}
| style="border: 1px solid #999999; vertical-align: top;" | {impf_3s}
| style="border: 1px solid #999999; vertical-align: top;" | {impf_1p}
| style="border: 1px solid #999999; vertical-align: top;" | {impf_2p}
| style="border: 1px solid #999999; vertical-align: top;" | {impf_3p}
|-
! style="border: 1px solid #999999; background:#b0bfd4" | <span title="pretérito perfeito">Preterite</span>
| style="border: 1px solid #999999; vertical-align: top;" | {pret_1s}
| style="border: 1px solid #999999; vertical-align: top;" | {pret_2s}
| style="border: 1px solid #999999; vertical-align: top;" | {pret_3s}
| style="border: 1px solid #999999; vertical-align: top;" | {pret_1p}
| style="border: 1px solid #999999; vertical-align: top;" | {pret_2p}
| style="border: 1px solid #999999; vertical-align: top;" | {pret_3p}
|-
! style="border: 1px solid #999999; background:#b0bfd4" | <span title="pretérito mais-que-perfeito simples">Pluperfect</span>
| style="border: 1px solid #999999; vertical-align: top;" | {plup_1s}
| style="border: 1px solid #999999; vertical-align: top;" | {plup_2s}
| style="border: 1px solid #999999; vertical-align: top;" | {plup_3s}
| style="border: 1px solid #999999; vertical-align: top;" | {plup_1p}
| style="border: 1px solid #999999; vertical-align: top;" | {plup_2p}
| style="border: 1px solid #999999; vertical-align: top;" | {plup_3p}
|-
! style="border: 1px solid #999999; background:#b0bfd4" | <span title="futuro do presente">Future</span>
| style="border: 1px solid #999999; vertical-align: top;" | {fut_1s}
| style="border: 1px solid #999999; vertical-align: top;" | {fut_2s}
| style="border: 1px solid #999999; vertical-align: top;" | {fut_3s}
| style="border: 1px solid #999999; vertical-align: top;" | {fut_1p}
| style="border: 1px solid #999999; vertical-align: top;" | {fut_2p}
| style="border: 1px solid #999999; vertical-align: top;" | {fut_3p}
|-
! style="border: 1px solid #999999; background:#b0bfd4" | <span title="condicional / futuro do pretérito">Conditional</span>
| style="border: 1px solid #999999; vertical-align: top;" | {cond_1s}
| style="border: 1px solid #999999; vertical-align: top;" | {cond_2s}
| style="border: 1px solid #999999; vertical-align: top;" | {cond_3s}
| style="border: 1px solid #999999; vertical-align: top;" | {cond_1p}
| style="border: 1px solid #999999; vertical-align: top;" | {cond_2p}
| style="border: 1px solid #999999; vertical-align: top;" | {cond_3p}
|-
! style="border: 1px solid #999999; background:#d0f4d0" colspan="7" | ''<span title="conjuntivo (pt) / subjuntivo (br)">Subjunctive</span>''
|-
! style="border: 1px solid #999999; background:#b0d4b0" | <span title=" presente do conjuntivo (pt) / subjuntivo (br)">Present</span>
| style="border: 1px solid #999999; vertical-align: top;" | {pres_sub_1s}
| style="border: 1px solid #999999; vertical-align: top;" | {pres_sub_2s}
| style="border: 1px solid #999999; vertical-align: top;" | {pres_sub_3s}
| style="border: 1px solid #999999; vertical-align: top;" | {pres_sub_1p}
| style="border: 1px solid #999999; vertical-align: top;" | {pres_sub_2p}
| style="border: 1px solid #999999; vertical-align: top;" | {pres_sub_3p}
|-
! style="border: 1px solid #999999; background:#b0d4b0" | <span title="pretérito imperfeito do conjuntivo (pt) / subjuntivo (br)">Imperfect</span>
| style="border: 1px solid #999999; vertical-align: top;" | {impf_sub_1s}
| style="border: 1px solid #999999; vertical-align: top;" | {impf_sub_2s}
| style="border: 1px solid #999999; vertical-align: top;" | {impf_sub_3s}
| style="border: 1px solid #999999; vertical-align: top;" | {impf_sub_1p}
| style="border: 1px solid #999999; vertical-align: top;" | {impf_sub_2p}
| style="border: 1px solid #999999; vertical-align: top;" | {impf_sub_3p}
|-
! style="border: 1px solid #999999; background:#b0d4b0" | <span title="futuro do conjuntivo (pt) / subjuntivo (br)">Future</span>
| style="border: 1px solid #999999; vertical-align: top;" | {fut_sub_1s}
| style="border: 1px solid #999999; vertical-align: top;" | {fut_sub_2s}
| style="border: 1px solid #999999; vertical-align: top;" | {fut_sub_3s}
| style="border: 1px solid #999999; vertical-align: top;" | {fut_sub_1p}
| style="border: 1px solid #999999; vertical-align: top;" | {fut_sub_2p}
| style="border: 1px solid #999999; vertical-align: top;" | {fut_sub_3p}
|-
! style="border: 1px solid #999999; background:#f4e4d0" colspan="7" | ''<span title="imperativo">Imperative</span>''
|-
! style="border: 1px solid #999999; background:#d4c4b0" | <span title="imperativo afirmativo">Affirmative</span>
| style="border: 1px solid #999999; vertical-align: top;" rowspan="2" |
| style="border: 1px solid #999999; vertical-align: top;" | {imp_2s}
| style="border: 1px solid #999999; vertical-align: top;" | {imp_3s}
| style="border: 1px solid #999999; vertical-align: top;" | {imp_1p}
| style="border: 1px solid #999999; vertical-align: top;" | {imp_2p}
| style="border: 1px solid #999999; vertical-align: top;" | {imp_3p}
|-
! style="border: 1px solid #999999; background:#d4c4b0" | <span title="imperativo negativo">Negative</span> (<<não>>)
| style="border: 1px solid #999999; vertical-align: top;" | {neg_imp_2s}
| style="border: 1px solid #999999; vertical-align: top;" | {neg_imp_3s}
| style="border: 1px solid #999999; vertical-align: top;" | {neg_imp_1p}
| style="border: 1px solid #999999; vertical-align: top;" | {neg_imp_2p}
| style="border: 1px solid #999999; vertical-align: top;" | {neg_imp_3p}
|{\cl}{notes_clause}</div></div>]=]
local double_pp_template = [=[
! style="border: 1px solid #999999; background:#ffc498" colspan="7" | ''<span title="particípio irregular">Short past participle</span>''
|-
! style="border: 1px solid #999999; background:#dfa478" | Masculine
| style="border: 1px solid #999999; vertical-align: top;" colspan="3" | {short_pp_ms}
| style="border: 1px solid #999999; vertical-align: top;" colspan="3" | {short_pp_mp}
|-
! style="border: 1px solid #999999; background:#dfa478" | Feminine
| style="border: 1px solid #999999; vertical-align: top;" colspan="3" | {short_pp_fs}
| style="border: 1px solid #999999; vertical-align: top;" colspan="3" | {short_pp_fp}
|-
! style="border: 1px solid #999999; background:#ffc498" colspan="7" | ''<span title="particípio regular">Long past participle</span>''
|-
! style="border: 1px solid #999999; background:#dfa478" | Masculine
| style="border: 1px solid #999999; vertical-align: top;" colspan="3" | {pp_ms}
| style="border: 1px solid #999999; vertical-align: top;" colspan="3" | {pp_mp}
|-
! style="border: 1px solid #999999; background:#dfa478" | Feminine
| style="border: 1px solid #999999; vertical-align: top;" colspan="3" | {pp_fs}
| style="border: 1px solid #999999; vertical-align: top;" colspan="3" | {pp_fp}
|-]=]
local single_pp_template = [=[
! style="border: 1px solid #999999; background:#ffc498" colspan="7" | ''<span title="particípio passado">Past participle</span>''
|-
! style="border: 1px solid #999999; background:#dfa478" | Masculine
| style="border: 1px solid #999999; vertical-align: top;" colspan="3" | {pp_ms}
| style="border: 1px solid #999999; vertical-align: top;" colspan="3" | {pp_mp}
|-
! style="border: 1px solid #999999; background:#dfa478" | Feminine
| style="border: 1px solid #999999; vertical-align: top;" colspan="3" | {pp_fs}
| style="border: 1px solid #999999; vertical-align: top;" colspan="3" | {pp_fp}
|-]=]
local function make_table(alternant_multiword_spec)
local forms = alternant_multiword_spec.forms
forms.title = link_term(alternant_multiword_spec.lemmas[1].form)
if alternant_multiword_spec.annotation ~= "" then
forms.title = forms.title .. " (" .. alternant_multiword_spec.annotation .. ")"
end
forms.description = ""
-- Format the table.
forms.footnote = alternant_multiword_spec.footnote_basic
forms.notes_clause = forms.footnote ~= "" and format(notes_template, forms) or ""
-- has_short_pp is computed in show_forms().
local pp_template = alternant_multiword_spec.has_short_pp and double_pp_template or single_pp_template
forms.pp_clause = format(pp_template, forms)
local table_with_pronouns = rsub(basic_table, "<<(.-)>>", link_term)
return format(table_with_pronouns, forms)
end
-- Externally callable function to parse and conjugate a verb given user-specified arguments.
-- Return value is WORD_SPEC, an object where the conjugated forms are in `WORD_SPEC.forms`
-- for each slot. If there are no values for a slot, the slot key will be missing. The value
-- for a given slot is a list of objects {form=FORM, footnotes=FOOTNOTES}.
function export.do_generate_forms(args, source_template, headword_head)
local PAGENAME = mw.title.getCurrentTitle().text
local function in_template_space()
return mw.title.getCurrentTitle().nsText == "Template"
end
-- Determine the verb spec we're being asked to generate the conjugation of. This may be taken from the
-- current page title or the value of |pagename=; but not when called from {{pt-verb form of}}, where the
-- page title is a non-lemma form. Note that the verb spec may omit the infinitive; e.g. it may be "<i-e>".
-- For this reason, we use the value of `pagename` computed here, further down when calling normalize_all_lemmas().
local pagename = source_template ~= "pt-verb form of" and args.pagename or PAGENAME
local head = headword_head or pagename
local arg1 = args[1]
if not arg1 then
if (pagename == "pt-conj" or pagename == "pt-verb") and in_template_space() then
arg1 = "cergir<i-e,i>"
elseif pagename == "pt-verb form of" and in_template_space() then
arg1 = "amar"
else
arg1 = "<>"
end
end
-- When called from {{pt-verb form of}}, determine the non-lemma form whose inflections we're being asked to
-- determine. This normally comes from the page title or the value of |pagename=.
local verb_form_of_form
if source_template == "pt-verb form of" then
verb_form_of_form = args.pagename
if not verb_form_of_form then
if PAGENAME == "pt-verb form of" and in_template_space() then
verb_form_of_form = "ame"
else
verb_form_of_form = PAGENAME
end
end
end
local incorporated_headword_head_into_lemma = false
if arg1:find("^<.*>$") then -- missing lemma
if head:find(" ") then
-- If multiword lemma, try to add arg spec after the first word.
-- Try to preserve the brackets in the part after the verb, but don't do it
-- if there aren't the same number of left and right brackets in the verb
-- (which means the verb was linked as part of a larger expression).
local refl_clitic_verb, post = rmatch(head, "^(.-)( .*)$")
local left_brackets = rsub(refl_clitic_verb, "[^%[]", "")
local right_brackets = rsub(refl_clitic_verb, "[^%]]", "")
if #left_brackets == #right_brackets then
arg1 = iut.remove_redundant_links(refl_clitic_verb) .. arg1 .. post
incorporated_headword_head_into_lemma = true
else
-- Try again using the form without links.
local linkless_head = m_links.remove_links(head)
if linkless_head:find(" ") then
refl_clitic_verb, post = rmatch(linkless_head, "^(.-)( .*)$")
arg1 = refl_clitic_verb .. arg1 .. post
else
error("Unable to incorporate <...> spec into explicit head due to a multiword linked verb or " ..
"unbalanced brackets; please include <> explicitly: " .. arg1)
end
end
else
-- Will be incorporated through `head` below in the call to normalize_all_lemmas().
incorporated_headword_head_into_lemma = true
end
end
local function split_bracketed_runs_into_words(bracketed_runs)
return iut.split_alternating_runs(bracketed_runs, " ", "preserve splitchar")
end
local parse_props = {
parse_indicator_spec = parse_indicator_spec,
-- Split words only on spaces, not on hyphens, because that messes up reflexive verb parsing.
split_bracketed_runs_into_words = split_bracketed_runs_into_words,
allow_default_indicator = true,
allow_blank_lemma = true,
}
local alternant_multiword_spec = iut.parse_inflected_text(arg1, parse_props)
alternant_multiword_spec.pos = pos or "verbs"
alternant_multiword_spec.args = args
alternant_multiword_spec.source_template = source_template
alternant_multiword_spec.verb_form_of_form = verb_form_of_form
alternant_multiword_spec.incorporated_headword_head_into_lemma = incorporated_headword_head_into_lemma
normalize_all_lemmas(alternant_multiword_spec, head)
detect_all_indicator_specs(alternant_multiword_spec)
local inflect_props = {
slot_list = alternant_multiword_spec.all_verb_slots,
inflect_word_spec = conjugate_verb,
get_variants = function(form) return rsub(form, not_var_code_c, "") end,
-- We add links around the generated verbal forms rather than allow the entire multiword
-- expression to be a link, so ensure that user-specified links get included as well.
include_user_specified_links = true,
}
iut.inflect_multiword_or_alternant_multiword_spec(alternant_multiword_spec, inflect_props)
-- Remove redundant brackets around entire forms.
for slot, forms in pairs(alternant_multiword_spec.forms) do
for _, form in ipairs(forms) do
form.form = iut.remove_redundant_links(form.form)
end
end
compute_categories_and_annotation(alternant_multiword_spec)
if args.json and source_template == "pt-conj" then
return export.remove_variant_codes(require("Module:JSON").toJSON(alternant_multiword_spec.forms))
end
return alternant_multiword_spec
end
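-- A minimal usage sketch of do_generate_forms() (assuming the module is loaded as
-- [[Module:pt-verb]], as on the English Wiktionary):
--   local m_pt_verb = require("Module:pt-verb")
--   local spec = m_pt_verb.do_generate_forms({[1] = "sentir<i-e>"}, "pt-conj")
--   -- spec.forms.pres_1s would then hold roughly { {form = "sinto"} }, per the slot format above.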
-- Entry point for {{pt-conj}}. Template-callable function to parse and conjugate a verb given
-- user-specified arguments and generate a displayable table of the conjugated forms.
function export.show(frame)
local parent_args = frame:getParent().args
local params = {
[1] = {},
["noautolinktext"] = {type = "boolean"},
["noautolinkverb"] = {type = "boolean"},
["pagename"] = {}, -- for testing/documentation pages
["json"] = {type = "boolean"}, -- for bot use
}
local args = require("Module:parameters").process(parent_args, params)
local alternant_multiword_spec = export.do_generate_forms(args, "pt-conj")
if type(alternant_multiword_spec) == "string" then
-- JSON return value
return alternant_multiword_spec
end
show_forms(alternant_multiword_spec)
return make_table(alternant_multiword_spec) ..
require("Module:utilities").format_categories(alternant_multiword_spec.categories, lang, nil, nil, force_cat)
end
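-- Typical wikitext usage (a sketch): on the page [[sentir]], {{pt-conj|sentir<i-e>}} (or simply
-- {{pt-conj|<i-e>}}, since a blank lemma defaults to the page name) renders the conjugation
-- table produced by make_table() and appends the computed categories.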
return export
donscgok7v6jgxlbbd3vvj0ekfoi1ub
193426
193425
2024-11-21T10:20:51Z
Lee
19
One revision imported from [[:en:Module:pt-verb]]
193425
Scribunto
text/plain
local export = {}
--[=[
Authorship: Ben Wing <benwing2>
]=]
--[=[
TERMINOLOGY:
-- "slot" = A particular combination of tense/mood/person/number/etc.
Example slot names for verbs are "pres_1s" (present indicative first-person singular), "pres_sub_2s" (present
subjunctive second-person singular) "impf_sub_3p" (imperfect subjunctive third-person plural).
Each slot is filled with zero or more forms.
-- "form" = The conjugated Portuguese form representing the value of a given slot.
-- "lemma" = The dictionary form of a given Portuguese term. For Portuguese, always the infinitive.
]=]
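-- For illustration: a filled slot is a list of objects of the shape {form = FORM, footnotes =
-- FOOTNOTES}, so for a regular verb like [[amar]], forms["pres_1s"] would contain roughly
-- { {form = "amo"} } and forms["pret_3p"] roughly { {form = "amaram"} }.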
--[=[
FIXME:
--"i-e" alternation doesn't work properly when the stem comes with a hiatus in it.
--]=]
local force_cat = false -- set to true for debugging
local check_for_red_links = false -- disabled; set to true to check inflection-table forms for red links
local lang = require("Module:languages").getByCode("pt")
local m_str_utils = require("Module:string utilities")
local m_links = require("Module:links")
local m_table = require("Module:table")
local iut = require("Module:inflection utilities")
local com = require("Module:pt-common")
local format = m_str_utils.format
local remove_final_accent = com.remove_final_accent
local rfind = m_str_utils.find
local rmatch = m_str_utils.match
local rsplit = m_str_utils.split
local rsub = com.rsub
local u = m_str_utils.char
local function link_term(term)
return m_links.full_link({ lang = lang, term = term }, "term")
end
local V = com.V -- vowel regex class
local AV = com.AV -- accented vowel regex class
local C = com.C -- consonant regex class
local AC = u(0x0301) -- acute = ́
local TEMPC1 = u(0xFFF1) -- temporary character used for consonant substitutions
local TEMP_MESOCLITIC_INSERTION_POINT = u(0xFFF2) -- temporary character used to mark the mesoclitic insertion point
local VAR_BR = u(0xFFF3) -- variant code for Brazil
local VAR_PT = u(0xFFF4) -- variant code for Portugal
local VAR_SUPERSEDED = u(0xFFF5) -- variant code for superseded forms
local VAR_NORMAL = u(0xFFF6) -- variant code for non-superseded forms
local all_var_codes = VAR_BR .. VAR_PT .. VAR_SUPERSEDED .. VAR_NORMAL
local var_codes_no_superseded = VAR_BR .. VAR_PT .. VAR_NORMAL
local var_code_c = "[" .. all_var_codes .. "]"
local var_code_no_superseded_c = "[" .. var_codes_no_superseded .. "]"
local not_var_code_c = "[^" .. all_var_codes .. "]"
-- Export variant codes for use in [[Module:pt-inflections]].
export.VAR_BR = VAR_BR
export.VAR_PT = VAR_PT
export.VAR_SUPERSEDED = VAR_SUPERSEDED
export.VAR_NORMAL = VAR_NORMAL
local short_pp_footnote = "[usually used with auxiliary verbs " .. link_term("ser") .. " and " .. link_term("estar") .. "]"
local long_pp_footnote = "[usually used with auxiliary verbs " .. link_term("haver") .. " and " .. link_term("ter") .. "]"
--[=[
Vowel alternations:
<i-e>: 'i' in pres1s and the whole present subjunctive; 'e' elsewhere when stressed. Generally 'e' otherwise when
unstressed. E.g. [[sentir]], [[conseguir]] (the latter additionally with 'gu-g' alternation).
<u-o>: 'u' in pres1s and the whole present subjunctive; 'o' elsewhere when stressed. Either 'o' or 'u' otherwise when
unstressed. E.g. [[dormir]], [[subir]].
<i>: 'i' whenever stressed (in the present singular and third plural) and throughout the whole present subjunctive.
Otherwise 'e'. E.g. [[progredir]], also [[premir]] per Priberam.
<u>: 'u' whenever stressed (in the present singular and third plural) and throughout the whole present subjunctive.
Otherwise 'o'. E.g. [[polir]], [[extorquir]] (the latter also <u-o>).
<í>: The last 'i' of the stem (excluding stem-final 'i') becomes 'í' when stressed. E.g.:
* [[proibir]] ('proíbo, proíbe(s), proíbem, proíba(s), proíbam')
* [[faiscar]] ('faísco, faísca(s), faíscam, faísque(s), faísquem' also with 'c-qu' alternation)
* [[homogeneizar]] ('homogeneízo', etc.)
* [[mobiliar]] ('mobílio', etc.; note here the final -i is ignored when determining which vowel to stress)
* [[tuitar]] ('tuíto', etc.)
<ú>: The last 'u' of the stem (excluding stem-final 'u') becomes 'ú' when stressed. E.g.:
* [[reunir]] ('reúno, reúne(s), reúnem, reúna(s), reúnam')
* [[esmiuçar]] ('esmiúço, esmiúça(s), esmiúça, esmiúce(s), esmiúcem' also with 'ç-c' alternation)
* [[reusar]] ('reúso, reúsa(s), reúsa, reúse(s), reúsem')
* [[saudar]] ('saúdo, saúda(s), saúda, saúde(s), saúdem')
]=]
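-- As a concrete illustration of <i-e> above (e.g. [[sentir]]): present indicative 'sinto,
-- sentes, sente, sentimos, sentis, sentem' ('i' in the first-person singular, stressed 'e'
-- elsewhere) and present subjunctive 'sinta, sintas, sinta, sintamos, sintais, sintam' ('i'
-- throughout).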
local vowel_alternants = m_table.listToSet({"i-e", "i", "í", "u-o", "u", "ú", "ei", "+"})
local vowel_alternant_to_desc = {
["i-e"] = "''i-e'' alternation in present singular",
["i"] = "''e'' becomes ''i'' when stressed",
["í"] = "''i'' becomes ''í'' when stressed",
["u-o"] = "''u-o'' alternation in present singular",
["u"] = "''o'' becomes ''u'' when stressed",
["ú"] = "''u'' becomes ''ú'' when stressed",
["ei"] = "''i'' becomes ''ei'' when stressed",
}
local vowel_alternant_to_cat = {
["i-e"] = "i-e alternation in present singular",
["i"] = "e becoming i when stressed",
["í"] = "i becoming í when stressed",
["u-o"] = "u-o alternation in present singular",
["u"] = "o becoming u when stressed",
["ú"] = "u becoming ú when stressed",
["ei"] = "i becoming ei when stressed",
}
local all_persons_numbers = {
["1s"] = "1|s",
["2s"] = "2|s",
["3s"] = "3|s",
["1p"] = "1|p",
["2p"] = "2|p",
["3p"] = "3|p",
}
local person_number_list = {"1s", "2s", "3s", "1p", "2p", "3p"}
local imp_person_number_list = {"2s", "3s", "1p", "2p", "3p"}
local neg_imp_person_number_list = {"2s", "3s", "1p", "2p", "3p"}
local person_number_to_reflexive_pronoun = {
["1s"] = "me",
["2s"] = "te",
["3s"] = "se",
["1p"] = "nos",
["2p"] = "vos",
["3p"] = "se",
}
local indicator_flags = m_table.listToSet {
"no_pres_stressed", "no_pres1_and_sub",
"only3s", "only3sp", "only3p",
"pp_inv", "irreg", "no_built_in", "e_ei_cat",
}
-- Remove any variant codes e.g. VAR_BR, VAR_PT, VAR_SUPERSEDED. Needs to be called from [[Module:pt-headword]] on the
-- output of do_generate_forms(). `keep_superseded` leaves VAR_SUPERSEDED; used in the `canonicalize` function of
-- show_forms() because we then process and remove it in `generate_forms`. FIXME: Use metadata for this once it's
-- supported in [[Module:inflection utilities]].
function export.remove_variant_codes(form, keep_superseded)
return rsub(form, keep_superseded and var_code_no_superseded_c or var_code_c, "")
end
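-- For example (illustrative only): remove_variant_codes(VAR_SUPERSEDED .. "pára") returns
-- "pára", whereas remove_variant_codes(VAR_SUPERSEDED .. "pára", "keep superseded") returns its
-- input unchanged, since only VAR_BR, VAR_PT and VAR_NORMAL are stripped in that case.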
-- Initialize all the slots for which we generate forms.
local function add_slots(alternant_multiword_spec)
-- "Basic" slots: All slots that go into the regular table (not the reflexive form-of table).
alternant_multiword_spec.verb_slots_basic = {
{"infinitive", "inf"},
{"infinitive_linked", "inf"},
{"gerund", "ger"},
{"short_pp_ms", "short|m|s|past|part"},
{"short_pp_fs", "short|f|s|past|part"},
{"short_pp_mp", "short|m|p|past|part"},
{"short_pp_fp", "short|f|p|past|part"},
{"pp_ms", "m|s|past|part"},
{"pp_fs", "f|s|past|part"},
{"pp_mp", "m|p|past|part"},
{"pp_fp", "f|p|past|part"},
}
-- Special slots used to handle non-reflexive parts of reflexive verbs in {{pt-verb form of}}.
-- For example, for a reflexive-only verb like [[esbaldar-se]], we want to be able to use {{pt-verb form of}} on
-- [[esbalde]] (which should mention that it is a part of 'me esbalde', first-person singular present subjunctive,
-- and 'se esbalde', third-person singular present subjunctive) or on [[esbaldamos]] (which should mention that it
-- is a part of 'esbaldamo-nos', first-person plural present indicative or preterite). Similarly, we want to use
-- {{pt-verb form of}} on [[esbaldando]] (which should mention that it is a part of 'se ... esbaldando', syntactic
-- variant of [[esbaldando-se]], which is the gerund of [[esbaldar-se]]). To do this, we need to be able to map
-- non-reflexive parts like [[esbalde]], [[esbaldamos]], [[esbaldando]], etc. to their reflexive equivalent(s), to
-- the tag(s) of the equivalent(s), and, in the case of forms like [[esbaldando]], [[esbaldar]] and imperatives, to
-- the separated syntactic variant of the verb+clitic combination. We do this by creating slots for the
-- non-reflexive part equivalent of each basic reflexive slot, and for the separated syntactic-variant equivalent
-- of each basic reflexive slot that is formed of verb+clitic. We use slots in this way to deal with multiword
-- lemmas. Note that we run into difficulties mapping between reflexive verbs, non-reflexive part equivalents, and
-- separated syntactic variants if a slot contains more than one form. To handle this, if there are the same number
-- of forms in two slots we're trying to match up, we assume the forms match one-to-one; otherwise we don't match up
-- the two slots (which means {{pt-verb form of}} won't work in this case, but such a case is extremely rare and not
-- worth worrying about). Alternatives that handle this "properly" are significantly more complicated and require
-- non-trivial modifications to [[Module:inflection utilities]].
local need_special_verb_form_of_slots = alternant_multiword_spec.source_template == "pt-verb form of" and
alternant_multiword_spec.refl
if need_special_verb_form_of_slots then
alternant_multiword_spec.verb_slots_reflexive_verb_form_of = {
{"infinitive_non_reflexive", "-"},
{"infinitive_variant", "-"},
{"gerund_non_reflexive", "-"},
{"gerund_variant", "-"},
}
else
alternant_multiword_spec.verb_slots_reflexive_verb_form_of = {}
end
-- Add entries for a slot with person/number variants.
-- `verb_slots` is the table to add to.
-- `slot_prefix` is the prefix of the slot, typically specifying the tense/aspect.
-- `tag_suffix` is a string listing the set of inflection tags to add after the person/number tags.
-- `person_number_list` is a list of the person/number slot suffixes to add to `slot_prefix`.
local function add_personal_slot(verb_slots, slot_prefix, tag_suffix, person_number_list)
for _, persnum in ipairs(person_number_list) do
local persnum_tag = all_persons_numbers[persnum]
local slot = slot_prefix .. "_" .. persnum
local accel = persnum_tag .. "|" .. tag_suffix
table.insert(verb_slots, {slot, accel})
end
end
-- Add a personal slot (i.e. a slot with person/number variants) to `verb_slots_basic`.
local function add_basic_personal_slot(slot_prefix, tag_suffix, person_number_list, no_special_verb_form_of_slot)
add_personal_slot(alternant_multiword_spec.verb_slots_basic, slot_prefix, tag_suffix, person_number_list)
-- Add special slots for handling non-reflexive parts of reflexive verbs in {{pt-verb form of}}.
-- See comment above in `need_special_verb_form_of_slots`.
if need_special_verb_form_of_slots and not no_special_verb_form_of_slot then
for _, persnum in ipairs(person_number_list) do
local persnum_tag = all_persons_numbers[persnum]
local basic_slot = slot_prefix .. "_" .. persnum
local accel = persnum_tag .. "|" .. tag_suffix
table.insert(alternant_multiword_spec.verb_slots_reflexive_verb_form_of, {basic_slot .. "_non_reflexive", "-"})
end
end
end
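-- For instance, the call add_basic_personal_slot("pres", "pres|ind", person_number_list) just
-- below adds the six entries {"pres_1s", "1|s|pres|ind"} through {"pres_3p", "3|p|pres|ind"} to
-- verb_slots_basic, plus the corresponding "_non_reflexive" slots when those are needed.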
add_basic_personal_slot("pres", "pres|ind", person_number_list)
add_basic_personal_slot("impf", "impf|ind", person_number_list)
add_basic_personal_slot("pret", "pret|ind", person_number_list)
add_basic_personal_slot("plup", "plup|ind", person_number_list)
add_basic_personal_slot("fut", "fut|ind", person_number_list)
add_basic_personal_slot("cond", "cond", person_number_list)
add_basic_personal_slot("pres_sub", "pres|sub", person_number_list)
add_basic_personal_slot("impf_sub", "impf|sub", person_number_list)
add_basic_personal_slot("fut_sub", "fut|sub", person_number_list)
add_basic_personal_slot("imp", "imp", imp_person_number_list)
add_basic_personal_slot("pers_inf", "pers|inf", person_number_list)
-- Don't need special non-reflexive-part slots because the negative imperative is multiword, of which the
-- individual words are 'não' + subjunctive.
add_basic_personal_slot("neg_imp", "neg|imp", neg_imp_person_number_list, "no special verb form of")
-- Don't need special non-reflexive-part slots because we don't want [[esbaldando]] mapping to [[esbaldando-me]]
-- (only [[esbaldando-se]]) or [[esbaldar]] mapping to [[esbaldar-me]] (only [[esbaldar-se]]).
add_basic_personal_slot("infinitive", "inf", person_number_list, "no special verb form of")
add_basic_personal_slot("gerund", "ger", person_number_list, "no special verb form of")
-- Generate the list of all slots.
alternant_multiword_spec.all_verb_slots = {}
for _, slot_and_accel in ipairs(alternant_multiword_spec.verb_slots_basic) do
table.insert(alternant_multiword_spec.all_verb_slots, slot_and_accel)
end
for _, slot_and_accel in ipairs(alternant_multiword_spec.verb_slots_reflexive_verb_form_of) do
table.insert(alternant_multiword_spec.all_verb_slots, slot_and_accel)
end
alternant_multiword_spec.verb_slots_basic_map = {}
for _, slotaccel in ipairs(alternant_multiword_spec.verb_slots_basic) do
local slot, accel = unpack(slotaccel)
alternant_multiword_spec.verb_slots_basic_map[slot] = accel
end
end
local overridable_stems = {}
local function allow_multiple_values(separated_groups, data)
local retvals = {}
for _, separated_group in ipairs(separated_groups) do
local footnotes = data.fetch_footnotes(separated_group)
local retval = {form = separated_group[1], footnotes = footnotes}
table.insert(retvals, retval)
end
return retvals
end
local function simple_choice(choices)
return function(separated_groups, data)
if #separated_groups > 1 then
data.parse_err("For spec '" .. data.prefix .. ":', only one value currently allowed")
end
if #separated_groups[1] > 1 then
data.parse_err("For spec '" .. data.prefix .. ":', no footnotes currently allowed")
end
local choice = separated_groups[1][1]
if not m_table.contains(choices, choice) then
data.parse_err("For spec '" .. data.prefix .. ":', saw value '" .. choice .. "' but expected one of '" ..
table.concat(choices, ",") .. "'")
end
return choice
end
end
for _, overridable_stem in ipairs {
"pres_unstressed",
"pres_stressed",
"pres1_and_sub",
-- Don't include pres1; use pres_1s if you need to override just that form
"impf",
"full_impf",
"pret_base",
"pret",
{"pret_conj", simple_choice({"irreg", "ar", "er", "ir"}) },
"fut",
"cond",
"pres_sub_stressed",
"pres_sub_unstressed",
{"sub_conj", simple_choice({"ar", "er"}) },
"plup",
"impf_sub",
"fut_sub",
"pers_inf",
"pp",
"short_pp",
} do
if type(overridable_stem) == "string" then
overridable_stems[overridable_stem] = allow_multiple_values
else
local stem, validator = unpack(overridable_stem)
overridable_stems[stem] = validator
end
end
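-- For illustration: a stem override such as <short_pp:aceito[Brazil],aceite[Portugal]> (see the
-- -ar notes below) goes through allow_multiple_values() and would yield roughly
-- { {form = "aceito", footnotes = {"[Brazil]"}}, {form = "aceite", footnotes = {"[Portugal]"}} },
-- whereas a single-choice spec like <pret_conj:irreg> goes through simple_choice() and must name
-- exactly one of the listed values, with no footnotes.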
-- Useful as the value of the `match` property of a built-in verb. `main_verb_spec` is a Lua pattern that should match
-- the non-prefixed part of a verb, and `prefix_specs` is a list of Lua patterns that should match the prefixed part of
-- a verb. If a prefix spec is preceded by ^, it must match exactly at the beginning of the verb; otherwise, additional
-- prefixes (e.g. re-, des-) may precede. Return the prefix and main verb.
local function match_against_verbs(main_verb_spec, prefix_specs)
return function(verb)
for _, prefix_spec in ipairs(prefix_specs) do
if prefix_spec:find("^%^") then
-- must match exactly
prefix_spec = prefix_spec:gsub("^%^", "")
if prefix_spec == "" then
-- We can't use the second branch of the if-else statement because an empty () returns the current position
-- in rmatch().
local main_verb = rmatch(verb, "^(" .. main_verb_spec .. ")$")
if main_verb then
return "", main_verb
end
else
local prefix, main_verb = rmatch(verb, "^(" .. prefix_spec .. ")(" .. main_verb_spec .. ")$")
if prefix then
return prefix, main_verb
end
end
else
local prefix, main_verb = rmatch(verb, "^(.*" .. prefix_spec .. ")(" .. main_verb_spec .. ")$")
if prefix then
return prefix, main_verb
end
end
end
return nil
end
end
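-- For example, a matcher like match_against_verbs("ter", {"abs", "con", "^"}) (cf. the [[ter]]
-- entry below) returns "con", "ter" for [[conter]], "", "ter" for bare [[ter]], and nil for
-- [[abater]], since "^" must match the verb exactly and "abater" contains none of the listed
-- prefixes.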
--[=[
Built-in (usually irregular) conjugations.
Each entry is processed in turn and consists of an object with two fields:
1. match=: Specifies the built-in verbs that match this object.
2. forms=: Specifies the built-in stems and forms for these verbs.
The value of match= is either a string beginning with "^" (match only the specified verb), a string not beginning
with "^" (match any verb ending in that string), or a function that is passed in the verb and should return the prefix
of the verb if it matches, otherwise nil. The function match_against_verbs() is provided to facilitate matching a set
of verbs with a common ending and specific prefixes (e.g. [[ter]] and [[ater]] but not [[abater]], etc.).
The value of forms= is a table specifying stems and individual override forms. Each key of the table names either a
stem (e.g. `pres_stressed`), a stem property (e.g. `vowel_alt`) or an individual override form (e.g. `pres_1s`).
Each value of a stem can either be a string (a single stem), a list of strings, or a list of objects of the form
{form = STEM, footnotes = {FOOTNOTES}}. Each value of an individual override should be of exactly the same form except
that the strings specify full forms rather than stems. The values of a stem property depend on the specific property
but are generally strings or booleans.
In order to understand how the stem specifications work, it's important to understand the phonetic modifications done
by combine_stem_ending(). In general, the complexities of predictable prefix, stem and ending modifications are all
handled in this function. In particular:
1. Spelling-based modifications (c/ç, c/qu, g/gu, gu/gü, g/j) occur automatically as appropriate for the ending.
2. If the stem begins with an acute accent, the accent is moved onto the last vowel of the prefix (for handling verbs
in -uar such as [[minguar]], pres_3s 'míngua').
3. If the ending begins with a double asterisk, this is a signal to conditionally delete the accent on the last letter
of the stem. "Conditionally" means we don't do it if the last two letters would form a diphthong without the accent
on the second one (e.g. in [[sair]], with stem 'saí'); but as an exception, we do delete the accent in stems
ending in -guí, -quí (e.g. in [[conseguir]]) because in this case the ui isn't a diphthong.
4. If the ending begins with an asterisk, this is a signal to delete the accent on the last letter of the stem, e.g.
fizé -> fizermos. Unlike for **, this removal is unconditional, so we get e.g. 'sairmos' not #'saírmos'.
5. If ending begins with i, it must get an accent after an unstressed vowel (in some but not all cases) to prevent the
two merging into a diphthong. See combine_stem_ending() for specifics.
The following stems are recognized:
-- pres_unstressed: The present indicative unstressed stem (1p, 2p). Also controls the imperative 2p
and gerund. Defaults to the infinitive stem (minus the ending -ar/-er/-ir/-or).
-- pres_stressed: The present indicative stressed stem (1s, 2s, 3s, 3p). Also controls the imperative 2s.
Default is empty if indicator `no_pres_stressed`, else a vowel alternation if such an indicator is given
(e.g. `i-e`, `í`), else the infinitive stem.
-- pres1_and_sub: Overriding stem for 1s present indicative and the entire subjunctive. Only set by irregular verbs
and by the indicators `no_pres_stressed` (e.g. [[precaver]]) and `no_pres1_and_sub` (since verbs of this sort,
e.g. [[puir]], are missing the entire subjunctive as well as the 1s present indicative). Used by many irregular
verbs, e.g. [[caber]], verbs in '-air', [[dizer]], [[ter]], [[valer]], etc. Some verbs set this and then supply an
override for the pres_1sg if it's irregular, e.g. [[saber]], with irregular subjunctive stem "saib-" and special
1s present indicative "sei".
-- pres1: Special stem for 1s present indicative. Normally, do not set this explicitly. If you need to specify an
irregular 1s present indicative, use the form override pres_1s= to specify the entire form. Defaults to
pres1_and_sub if given, else pres_stressed.
-- pres_sub_unstressed: The present subjunctive unstressed stem (1p, 2p). Defaults to pres1_and_sub if given, else the
infinitive stem.
-- pres_sub_stressed: The present subjunctive stressed stem (1s, 2s, 3s, 1p). Defaults to pres1.
-- sub_conj: Determines the set of endings used in the subjunctive. Should be one of "ar" or "er".
-- impf: The imperfect stem (not including the -av-/-i- stem suffix, which is determined by the conjugation). Defaults
to the infinitive stem.
-- full_impf: The full imperfect stem missing only the endings (-a, -as, -am, etc.). Used for verbs with irregular
imperfects such as [[ser]], [[ter]], [[vir]] and [[pôr]]. Overrides must be supplied for the impf_1p and impf_2p
due to these forms having an accent on the stem.
-- pret_base: The preterite stem (not including the -a-/-e-/-i- stem suffix). Defaults to the infinitive stem.
-- pret: The full preterite stem missing only the endings (-ste, -mos, etc.). Used for verbs with irregular preterites
(pret_conj == "irreg") such as [[fazer]], [[poder]], [[trazer]], etc. Overrides must be supplied for the pret_1s
and pret_3s. Defaults to `pret_base` + the accented conjugation vowel.
-- pret_conj: Determines the set of endings used in the preterite. Should be one of "ar", "er", "ir" or "irreg".
Defaults to the conjugation as determined from the infinitive. When pret_conj == "irreg", stem `pret` is used,
otherwise `pret_base`.
-- fut: The future stem. Defaults to the infinitive stem + the unaccented conjugation vowel.
-- cond: The conditional stem. Defaults to `fut`.
-- impf_sub: The imperfect subjunctive stem. Defaults to `pret`.
-- fut_sub: The future subjunctive stem. Defaults to `pret`.
-- plup: The pluperfect stem. Defaults to `pret`.
-- pers_inf: The personal infinitive stem. Defaults to the infinitive stem + the accented conjugation vowel.
-- pp: The masculine singular past participle. Default is based on the verb conjugation: infinitive stem + "ado" for
-ar verbs, otherwise infinitive stem + "ido".
-- short_pp: The short masculine singular past participle, for verbs with such a form. No default.
-- pp_inv: True if the past participle exists only in the masculine singular.
]=]
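-- To illustrate point 1 above: combine_stem_ending() gives a -car verb such as [[ficar]] 'fico'
-- but 'fique' (c/qu), a -çar verb such as [[começar]] 'começo' but 'comece' (c/ç), and a -gar
-- verb such as [[apagar]] 'apago' but 'apague' (g/gu); none of these verbs need entries in the
-- table below.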
local built_in_conjugations = {
--------------------------------------------------------------------------------------------
-- -ar --
--------------------------------------------------------------------------------------------
-- Verbs not needing entries here:
--
-- (1) Verbs with short past participles: need to specify the short pp explicitly.
--
-- aceitar: use <short_pp:aceito[Brazil],aceite[Portugal]>
-- anexar, completar, expressar, expulsar, findar, fritar, ganhar, gastar, limpar, pagar, pasmar, pegar, soltar:
-- use <short_pp:anexo> etc.
-- assentar: use <short_pp:assente>
-- entregar: use <short_pp:entregue>
-- enxugar: use <short_pp:enxuto>
-- matar: use <short_pp:morto>
--
-- (2) Verbs with orthographic consonant alternations: handled automatically.
--
-- -car (brincar, buscar, pecar, trancar, etc.): automatically handled in combine_stem_ending()
-- -çar (alcançar, começar, laçar): automatically handled in combine_stem_ending()
-- -gar (apagar, cegar, esmagar, largar, navegar, resmungar, sugar, etc.): automatically handled in combine_stem_ending()
--
-- (3) Verbs with vowel alternations: need to specify the alternation explicitly unless it always happens, in
-- which case it's handled automatically through an entry below.
--
-- esmiuçar changing to esmiúço: use <ú>
-- faiscar changing to faísco: use <í>
-- -iar changing to -eio (ansiar, incendiar, mediar, odiar, remediar, etc.): use <ei>
-- -izar changing to -ízo (ajuizar, enraizar, homogeneizar, plebeizar, etc.): use <í>
-- mobiliar changing to mobílio: use <í>
-- reusar changing to reúso: use <ú>
-- saudar changing to saúdo: use <ú>
-- tuitar/retuitar changing to (re)tuíto: use <í>
{
-- dar, desdar
match = match_against_verbs("dar", {"^", "^des", "^re"}),
forms = {
pres_1s = "dou",
pres_2s = "dás",
pres_3s = "dá",
-- damos, dais regular
pres_3p = "dão",
pret = "dé", pret_conj = "irreg", pret_1s = "dei", pret_3s = "deu",
pres_sub_1s = "dê",
pres_sub_2s = "dês",
pres_sub_3s = "dê",
pres_sub_1p = {"demos", "dêmos"},
-- deis regular
pres_sub_3p = {"deem", VAR_SUPERSEDED .. "dêem"},
irreg = true,
}
},
{
-- -ear (frear, nomear, semear, etc.)
match = "ear",
forms = {
pres_stressed = "ei",
e_ei_cat = true,
}
},
{
-- estar
match = match_against_verbs("estar", {"^", "sob", "sobr"}),
forms = {
pres_1s = "estou",
pres_2s = "estás",
pres_3s = "está",
-- FIXME, estámos is claimed as an alternative pres_1p in the old conjugation data, but I believe this is garbage
pres_3p = "estão",
pres1_and_sub = "estej", -- only for subjunctive as we override pres_1s
sub_conj = "er",
pret = "estivé", pret_conj = "irreg", pret_1s = "estive", pret_3s = "esteve",
-- [[sobestar]], [[sobrestar]] are transitive so they have fully inflected past participles
pp_inv = function(base, prefix) return prefix == "" end,
irreg = true,
}
},
{
-- It appears that only [[resfolegar]] has proparoxytone forms, not [[folegar]] or [[tresfolegar]].
match = "^resfolegar",
forms = {
pres_stressed = {"resfóleg", "resfoleg"},
irreg = true,
}
},
{
-- aguar/desaguar/enxaguar, ambiguar/apaziguar/averiguar, minguar, cheguar?? (obsolete variant of [[chegar]])
match = "guar",
forms = {
-- combine_stem_ending() will move the acute accent backwards so it sits after the last vowel in [[minguar]]
pres_stressed = {{form = AC .. "gu", footnotes = {"[Brazilian Portuguese]"}}, {form = "gu", footnotes = {"[European Portuguese]"}}},
pres_sub_stressed = {
{form = AC .. "gu", footnotes = {"[Brazilian Portuguese]"}},
{form = "gu", footnotes = {"[European Portuguese]"}},
{form = AC .. VAR_SUPERSEDED .. "gü", footnotes = {"[Brazilian Portuguese]"}},
{form = VAR_SUPERSEDED .. "gú", footnotes = {"[European Portuguese]"}},
},
pres_sub_unstressed = {"gu", {form = VAR_SUPERSEDED .. "gü", footnotes = {"[Brazilian Portuguese]"}}},
pret_1s = {"guei", {form = VAR_SUPERSEDED .. "güei", footnotes = {"[Brazilian Portuguese]"}}},
}
},
{
-- adequar/readequar, antiquar/obliquar, apropinquar
match = "quar",
forms = {
-- combine_stem_ending() will move the acute accent backwards so it sits after the last vowel in [[apropinquar]]
pres_stressed = {{form = AC .. "qu", footnotes = {"[Brazilian Portuguese]"}}, {form = "qu", footnotes = {"[European Portuguese]"}}},
pres_sub_stressed = {
{form = AC .. "qu", footnotes = {"[Brazilian Portuguese]"}},
{form = "qu", footnotes = {"[European Portuguese]"}},
{form = AC .. VAR_SUPERSEDED .. "qü", footnotes = {"[Brazilian Portuguese]"}},
{form = VAR_SUPERSEDED .. "qú", footnotes = {"[European Portuguese]"}},
},
pres_sub_unstressed = {"qu", {form = VAR_SUPERSEDED .. "qü", footnotes = {"[Brazilian Portuguese]"}}},
pret_1s = {"quei", {form = VAR_SUPERSEDED .. "qüei", footnotes = {"[Brazilian Portuguese]"}}},
}
},
{
-- -oar (abençoar, coroar, enjoar, perdoar, etc.)
match = "oar",
forms = {
pres_1s = {"oo", VAR_SUPERSEDED .. "ôo"},
}
},
{
-- -oiar (apoiar, boiar)
match = "oiar",
forms = {
pres_stressed = {"oi", {form = VAR_SUPERSEDED .. "ói", footnotes = {"[Brazilian Portuguese]"}}},
}
},
{
-- parar
match = "^parar",
forms = {
pres_3s = {"para", VAR_SUPERSEDED .. "pára"},
}
},
{
-- pelar
match = "^pelar",
forms = {
pres_1s = {"pelo", VAR_SUPERSEDED .. "pélo"},
pres_2s = {"pelas", VAR_SUPERSEDED .. "pélas"},
pres_3s = {"pela", VAR_SUPERSEDED .. "péla"},
}
},
--------------------------------------------------------------------------------------------
-- -er --
--------------------------------------------------------------------------------------------
-- Verbs not needing entries here:
--
-- precaver: use <no_pres_stressed>
-- -cer (verbs in -ecer, descer, vencer, etc.): automatically handled in combine_stem_ending()
-- -ger (proteger, reger, etc.): automatically handled in combine_stem_ending()
-- -guer (erguer/reerguer/soerguer): automatically handled in combine_stem_ending()
{
-- benzer
match = "benzer",
forms = {short_pp = "bento"}
},
{
-- caber
match = "caber",
forms = {
pres1_and_sub = "caib",
pret = "coubé", pret_1s = "coube", pret_3s = "coube", pret_conj = "irreg",
irreg = true,
}
},
{
-- crer, descrer
match = "crer",
forms = {
pres_2s = "crês", pres_3s = "crê",
pres_2p = "credes", pres_3p = {"creem", VAR_SUPERSEDED .. "crêem"},
pres1_and_sub = "crei",
irreg = true,
}
},
{
-- dizer, bendizer, condizer, contradizer, desdizer, maldizer, predizer, etc.
match = "dizer",
forms = {
-- use 'digu' because we're in a front environment; if we use 'dig', we'll get '#dijo'
pres1_and_sub = "digu", pres_3s = "diz",
pret = "dissé", pret_conj = "irreg", pret_1s = "disse", pret_3s = "disse", pp = "dito",
fut = "dir",
imp_2s = {"diz", "dize"}, -- per Infopédia
irreg = true,
}
},
{
-- eleger, reeleger
match = "eleger",
forms = {short_pp = "eleito"}
},
{
-- acender, prender; not desprender, etc.
match = match_against_verbs("ender", {"^ac", "^pr"}),
forms = {short_pp = "eso"}
},
{
-- fazer, afazer, contrafazer, desfazer, liquefazer, perfazer, putrefazer, rarefazer, refazer, satisfazer, tumefazer
match = "fazer",
forms = {
pres1_and_sub = "faç", pres_3s = "faz",
pret = "fizé", pret_conj = "irreg", pret_1s = "fiz", pret_3s = "fez", pp = "feito",
fut = "far",
imp_2s = {"faz", {form = "faze", footnotes = {"[Brazil only]"}}}, -- per Priberam
irreg = true,
}
},
{
match = "^haver",
forms = {
pres_1s = "hei",
pres_2s = "hás",
pres_3s = "há",
pres_1p = {"havemos", "hemos"},
pres_2p = {"haveis", "heis"},
pres_3p = "hão",
pres1_and_sub = "haj", -- only for subjunctive as we override pres_1s
pret = "houvé", pret_conj = "irreg", pret_1s = "houve", pret_3s = "houve",
imp_2p = "havei",
irreg = true,
}
},
-- reaver below under r-
{
-- jazer, adjazer
match = "jazer",
forms = {
pres_3s = "jaz",
imp_2s = {"jaz", "jaze"}, -- per Infopédia
irreg = true,
}
},
{
-- ler, reler, tresler; not excel(l)er, valer, etc.
match = match_against_verbs("ler", {"^", "^re", "tres"}),
forms = {
pres_2s = "lês", pres_3s = "lê",
pres_2p = "ledes", pres_3p = {"leem", VAR_SUPERSEDED .. "lêem"},
pres1_and_sub = "lei",
irreg = true,
}
},
{
-- morrer, desmorrer
match = "morrer",
forms = {short_pp = "morto"}
},
{
-- doer, moer/remoer, roer/corroer, soer
match = "oer",
forms = {
pres_1s = function(base, prefix)
return prefix ~= "s" and {"oo", VAR_SUPERSEDED .. "ôo"} or nil
end, pres_2s = "óis", pres_3s = "ói",
-- impf -ía etc., pret_1s -oí and pp -oído handled automatically in combine_stem_ending()
only3sp = function(base, prefix) return prefix == "d" end,
no_pres1_and_sub = function(base, prefix) return prefix == "s" end,
irreg = true,
}
},
{
-- perder
match = "perder",
forms = {
-- use 'perqu' because we're in a front environment; if we use 'perc', we'll get '#perço'
pres1_and_sub = "perqu",
irreg = true,
}
},
{
-- poder
match = "poder",
forms = {
pres1_and_sub = "poss",
pret = "pudé", pret_1s = "pude", pret_3s = "pôde", pret_conj = "irreg",
irreg = true,
}
},
{
-- prazer, aprazer, comprazer, desprazer
match = "prazer",
forms = {
pres_3s = "praz",
pret = "prouvé", pret_1s = "prouve", pret_3s = "prouve", pret_conj = "irreg",
only3sp = function(base, prefix) return not prefix:find("com$") end,
irreg = true,
}
},
-- prover below, just below ver
{
-- requerer; must precede querer
match = "requerer",
forms = {
-- old module claims alt pres_3s 'requere'; not in Priberam, Infopédia or conjugacao.com.br
pres_3s = "requer",
pres1_and_sub = "requeir",
imp_2s = {{form = "requere", footnotes = {"[Brazil only]"}}, "requer"}, -- per Priberam
-- regular preterite, unlike [[querer]]
irreg = true,
}
},
{
-- querer, desquerer, malquerer
match = "querer",
forms = {
-- old module claims alt pres_3s 'quere'; not in Priberam, Infopédia or conjugacao.com.br
pres_1s = "quero", pres_3s = "quer",
pres1_and_sub = "queir", -- only for subjunctive as we override pres_1s
pret = "quisé", pret_1s = "quis", pret_3s = "quis", pret_conj = "irreg",
imp_2s = {{form = "quere", footnotes = {"[Brazil only]"}}, {form = "quer", footnotes = {"[Brazil only]"}}}, -- per Priberam
irreg = true,
}
},
{
match = "reaver",
forms = {
no_pres_stressed = true,
pret = "reouvé", pret_conj = "irreg", pret_1s = "reouve", pret_3s = "reouve",
irreg = true,
}
},
{
-- saber, ressaber
match = "saber",
forms = {
pres_1s = "sei",
pres1_and_sub = "saib", -- only for subjunctive as we override pres_1s
pret = "soubé", pret_1s = "soube", pret_3s = "soube", pret_conj = "irreg",
irreg = true,
}
},
{
-- escrever/reescrever, circunscrever, descrever/redescrever, inscrever, prescrever, proscrever, subscrever,
-- transcrever, others?
match = "screver",
forms = {
pp = "scrito",
irreg = true,
}
},
{
-- suspender
match = "suspender",
forms = {short_pp = "suspenso"}
},
{
match = "^ser",
forms = {
pres_1s = "sou", pres_2s = "és", pres_3s = "é",
pres_1p = "somos", pres_2p = "sois", pres_3p = "são",
pres1_and_sub = "sej", -- only for subjunctive as we override pres_1s
full_impf = "er", impf_1p = "éramos", impf_2p = "éreis",
pret = "fô", pret_1s = "fui", pret_3s = "foi", pret_conj = "irreg",
imp_2s = "sê", imp_2p = "sede",
pp_inv = true,
irreg = true,
}
},
{
-- We want to match abster, conter, deter, etc. but not abater, cometer, etc. No way to avoid listing each verb.
match = match_against_verbs("ter", {"abs", "^a", "con", "de", "entre", "man", "ob", "^re", "sus", "^"}),
forms = {
pres_2s = function(base, prefix) return prefix == "" and "tens" or "téns" end,
pres_3s = function(base, prefix) return prefix == "" and "tem" or "tém" end,
pres_2p = "tendes", pres_3p = "têm",
pres1_and_sub = "tenh",
full_impf = "tinh", impf_1p = "tínhamos", impf_2p = "tínheis",
pret = "tivé", pret_1s = "tive", pret_3s = "teve", pret_conj = "irreg",
irreg = true,
}
},
{
match = "trazer",
forms = {
-- use 'tragu' because we're in a front environment; if we use 'trag', we'll get '#trajo'
pres1_and_sub = "tragu", pres_3s = "traz",
pret = "trouxé", pret_1s = "trouxe", pret_3s = "trouxe", pret_conj = "irreg",
fut = "trar",
irreg = true,
}
},
{
-- valer, desvaler, equivaler
match = "valer",
forms = {
pres1_and_sub = "valh",
irreg = true,
}
},
{
-- coerir, incoerir
--FIXME: This should be a part of the <i-e> section. It's an "i-e", but with accents to prevent a diphthong when it gets stressed.
match = "coerir",
forms = {
vowel_alt = "i-e",
pres1_and_sub = "coír",
pres_sub_unstressed = "coir",
}
},
{
-- We want to match antever etc. but not absolver, atrever etc. No way to avoid listing each verb.
match = match_against_verbs("ver", {"ante", "entre", "pre", "^re", "^"}),
forms = {
pres_2s = "vês", pres_3s = "vê",
pres_2p = "vedes", pres_3p = {"veem", VAR_SUPERSEDED .. "vêem"},
pres1_and_sub = "vej",
pret = "ví", pret_1s = "vi", pret_3s = "viu", pret_conj = "irreg",
pp = "visto",
irreg = true,
}
},
{
-- [[prover]] and [[desprover]] have regular preterite and past participle
match = "prover",
forms = {
pres_2s = "provês", pres_3s = "provê",
pres_2p = "provedes", pres_3p = {"proveem", VAR_SUPERSEDED .. "provêem"},
pres1_and_sub = "provej",
irreg = true,
}
},
{
-- Only envolver, revolver. Not volver, desenvolver, devolver, evolver, etc.
match = match_against_verbs("volver", {"^en", "^re"}),
forms = {short_pp = "volto"},
},
--------------------------------------------------------------------------------------------
-- -ir --
--------------------------------------------------------------------------------------------
-- Verbs not needing entries here:
--
-- abolir: per Priberam: <no_pres1_and_sub> for Brazil, use <u-o> for Portugal
-- barrir: use <only3sp>
-- carpir, colorir, demolir: use <no_pres1_and_sub>
-- descolorir: per Priberam: <no_pres_stressed> for Brazil, use <no_pres1_and_sub> for Portugal
-- delir, espavorir, falir, florir, remir, renhir: use <no_pres_stressed>
-- empedernir: per Priberam: <no_pres_stressed> for Brazil, use <i-e> for Portugal
-- transir: per Priberam: <no_pres_stressed> for Brazil, regular for Portugal
-- aspergir, despir, flectir/deflectir/genuflectir/genufletir/reflectir/refletir, mentir/desmentir,
-- sentir/assentir/consentir/dissentir/pressentir/ressentir, convergir/divergir, aderir/adherir,
-- ferir/auferir/conferir/deferir/desferir/diferir/differir/inferir/interferir/preferir/proferir/referir/transferir,
-- gerir/digerir/ingerir/sugerir, preterir, competir/repetir, servir, advertir/animadvertir/divertir,
-- vestir/investir/revestir/travestir, seguir/conseguir/desconseguir/desseguir/perseguir/prosseguir: use <i-e>
-- inerir: use <i-e> (per Infopédia, and per Priberam for Brazil), use <i-e.only3sp> (per Priberam for Portugal)
-- compelir/expelir/impelir/repelir: per Priberam: use <i-e> for Brazil, <no_pres1_and_sub> for Portugal (Infopédia
-- says <i-e>); NOTE: old module claims short_pp 'repulso' but none of Priberam, Infopédia and conjugacao.com.br agree
-- dormir, engolir, tossir, subir, acudir/sacudir, fugir, sumir/consumir (NOT assumir/presumir/resumir): use <u-o>
-- polir/repolir (claimed in old module to have no pres stressed, but Priberam disagrees for both Brazil and
-- Portugal; Infopédia lists repolir as completely regular and not like polir, but I think that's an error): use
-- <u>
-- premir: per Priberam: use <no_pres1_and_sub> for Brazil, <i> for Portugal (for Portugal, Priberam says
-- primo/primes/prime, while Infopédia says primo/premes/preme; Priberam is probably more reliable)
-- extorquir/retorquir use <no_pres1_and_sub> for Brazil, <u-o,u> for Portugal
-- agredir/progredir/regredir/transgredir: use <i>
-- denegrir, prevenir: use <i>
-- eclodir: per Priberam: regular in Brazil, <u-o.only3sp> in Portugal (Infopédia says regular)
-- cerzir: per Priberam: use <i> for Brazil, use <i-e> for Portugal (Infopédia says <i-e,i>)
-- cergir: per Priberam: use <i-e> for Brazil, no conjugation given for Portugal (Infopédia says <i-e>)
-- proibir/coibir: use <í>
-- reunir: use <ú>
-- parir/malparir: use <no_pres_stressed> (old module had pres_1s = {paro (1_defective), pairo (1_obsolete_alt)},
-- pres_2s = pares, pres_3s = pare, and subjunctive stem par- or pair-, but both Priberam and Infopédia agree
-- that these verbs are no_pres_stressed)
-- explodir/implodir: use <u-o> (claimed in old module to be <+,u-o> but neither Priberam nor Infopédia agree)
--
-- -cir alternations (aducir, ressarcir): automatically handled in combine_stem_ending()
-- -gir alternations (agir, dirigir, exigir): automatically handled in combine_stem_ending()
-- -guir alternations (e.g. conseguir): automatically handled in combine_stem_ending()
-- -quir alternations (e.g. extorquir): automatically handled in combine_stem_ending()
{
-- verbs in -air (cair, sair, trair and derivatives: decair/descair/recair, sobres(s)air,
-- abstrair/atrair/contrair/distrair/extrair/protrair/retrair/subtrair)
match = "air",
forms = {
pres1_and_sub = "ai", pres_2s = "ais", pres_3s = "ai",
-- all occurrences of accented í in endings handled in combine_stem_ending()
irreg = true,
}
},
{
-- abrir/desabrir/reabrir
match = "abrir",
forms = {pp = "aberto"}
},
{
-- cobrir/descobrir/encobrir/recobrir/redescobrir
match = "cobrir",
forms = {vowel_alt = "u-o", pp = "coberto"}
},
{
-- conduzir, produzir, reduzir, traduzir, etc.; luzir, reluzir, tremeluzir
match = "uzir",
forms = {
pres_3s = "uz",
imp_2s = {"uz", "uze"}, -- per Infopédia
irreg = true,
}
},
{
-- pedir, desimpedir, despedir, espedir, expedir, impedir
-- medir
-- comedir (per Priberam, no_pres_stressed in Brazil)
match = match_against_verbs("edir", {"m", "p"}),
forms = {
pres1_and_sub = "eç",
irreg = true,
}
},
{
-- frigir
match = "frigir",
forms = {vowel_alt = "i-e", short_pp = "frito"},
},
{
-- inserir
match = "inserir",
forms = {vowel_alt = "i-e", short_pp = {form = "inserto", footnotes = {"[European Portuguese only]"}}},
},
{
-- ir
match = "^ir",
forms = {
pres_1s = "vou", pres_2s = "vais", pres_3s = "vai",
pres_1p = "vamos", pres_2p = "ides", pres_3p = "vão",
pres_sub_1s = "vá", pres_sub_2s = "vás", pres_sub_3s = "vá",
pres_sub_1p = "vamos", pres_sub_2p = "vades", pres_sub_3p = "vão",
pret = "fô", pret_1s = "fui", pret_3s = "foi", pret_conj = "irreg",
irreg = true,
}
},
{
-- emergir, imergir, submergir
match = "mergir",
forms = {vowel_alt = {"i-e", "+"}, short_pp = "merso"},
},
{
match = "ouvir",
forms = {
pres1_and_sub = {"ouç", "oiç"},
irreg = true,
}
},
{
-- exprimir, imprimir, comprimir (but not descomprimir per Priberam), deprimir, oprimir/opprimir (but not reprimir,
-- suprimir/supprimir per Priberam)
match = match_against_verbs("primir", {"^com", "ex", "im", "de", "^o", "op"}),
forms = {short_pp = "presso"}
},
{
-- rir, sorrir
match = match_against_verbs("rir", {"^", "sor"}),
forms = {
pres_2s = "ris", pres_3s = "ri", pres_2p = "rides", pres_3p = "riem",
pres1_and_sub = "ri",
irreg = true,
}
},
{
-- distinguir, extinguir
match = "tinguir",
forms = {
short_pp = "tinto",
-- gu/g alternations handled in combine_stem_ending()
}
},
{
-- delinquir, arguir/redarguir
-- NOTE: The following is based on delinquir, with arguir/redarguir by parallelism.
-- In Priberam, delinquir and arguir are exactly parallel, but in Infopédia they aren't; only delinquir has
-- alternatives like 'delínques'. I assume this is because forms like 'delínques' are Brazilian and
-- Infopédia is from Portugal, so their coverage of Brazilian forms may be inconsistent.
match = match_against_verbs("uir", {"delinq", "arg"}),
forms = {
-- use 'ü' because we're in a front environment; if we use 'u', we'll get '#delinco', '#argo'
pres1_and_sub = {{form = AC .. "ü", footnotes = {"[Brazilian Portuguese]"}}, {form = "ü", footnotes = {"[European Portuguese]"}}},
-- FIXME: verify. This is by partial parallelism with the present subjunctive of verbs in -quar (also a
-- front environment). Infopédia has 'delinquis ou delínques' and Priberam has 'delinqúis'.
pres_2s = {
{form = AC .. "ues", footnotes = {"[Brazilian Portuguese]"}},
{form = "uis", footnotes = {"[European Portuguese]"}},
-- This form should occur only with an infinitive 'delinqüir' etc.
-- {form = AC .. VAR_SUPERSEDED .. "ües", footnotes = {"[Brazilian Portuguese]"}},
{form = VAR_SUPERSEDED .. "úis", footnotes = {"[European Portuguese]"}},
},
-- Same as previous.
pres_3s = {
{form = AC .. "ue", footnotes = {"[Brazilian Portuguese]"}},
{form = "ui", footnotes = {"[European Portuguese]"}},
-- This form should occur only with an infinitive 'delinqüir' etc.
-- {form = AC .. VAR_SUPERSEDED .. "üe", footnotes = {"[Brazilian Portuguese]"}},
{form = VAR_SUPERSEDED .. "úi", footnotes = {"[European Portuguese]"}},
},
-- Infopédia has 'delinquem ou delínquem' and Priberam has 'delinqúem'.
pres_3p = {
{form = AC .. "uem", footnotes = {"[Brazilian Portuguese]"}},
{form = "uem", footnotes = {"[European Portuguese]"}},
-- This form should occur only with an infinitive 'delinqüir' etc.
-- {form = AC .. VAR_SUPERSEDED .. "üem", footnotes = {"[Brazilian Portuguese]"}},
{form = VAR_SUPERSEDED .. "úem", footnotes = {"[European Portuguese]"}},
},
-- FIXME: The old module also had several other alternative forms (given as [123]_alt, not identified as
-- obsolete):
-- impf: delinquia/delinquía, delinquias/delinquías, delinquia/delinquía, delinquíamos, delinquíeis, delinquiam/delinquíam
-- plup: delinquira/delinquíra, delinquiras/delinquíras, delinquira/delinquíra, delinquíramos, delinquíreis, delinquiram/delinquíram
-- pres_1p = delinquimos/delinquímos, pres_2p = delinquis/delinquís
-- pret = delinqui/delinquí, delinquiste/delinquíste, delinquiu, delinquimos/delinquímos, delinquistes/delinquístes, delinquiram/delinquíram
-- pers_inf = delinquir, delinquires, delinquir, delinquirmos, delinquirdes, delinquirem/delinquírem
-- fut_sub = delinquir, delinquires, delinquir, delinquirmos, delinquirdes, delinquirem/delinquírem
--
-- None of these alternative forms can be found in the Infopédia, Priberam, Collins or Reverso conjugation
-- tables, so their status is unclear, and I have omitted them.
}
},
{
-- verbs in -truir (construir, destruir, reconstruir) but not obstruir/desobstruir, instruir, which are handled
-- by the default -uir handler below
match = match_against_verbs("struir", {"con", "de"}),
forms = {
pres_2s = {"stróis", "struis"}, pres_3s = {"strói", "strui"}, pres_3p = {"stroem", "struem"},
-- all occurrences of accented í in endings handled in combine_stem_ending()
irreg = true,
}
},
{
-- verbs in -cluir (concluir, excluir, incluir): like -uir but has short_pp concluso etc. in Brazil
match = "cluir",
forms = {
pres_2s = "cluis", pres_3s = "clui",
-- all occurrences of accented í in endings handled in combine_stem_ending()
short_pp = {form = "cluso", footnotes = {"[Brazil only]"}},
irreg = true,
}
},
{
-- puir, ruir: like -uir but defective in pres_1s, all pres sub
match = match_against_verbs("uir", {"^p", "^r"}),
forms = {
pres_2s = "uis", pres_3s = "ui",
-- all occurrences of accented í in endings handled in combine_stem_ending()
no_pres1_and_sub = true,
irreg = true,
}
},
{
-- remaining verbs in -uir (concluir/excluir/incluir/concruir/concruyr, abluir/diluir, afluir/fluir/influir,
-- aluir, anuir, atribuir/contribuir/distribuir/redistribuir/retribuir/substituir, coevoluir/evoluir,
-- constituir/destituir/instituir/reconstituir/restituir, derruir, diminuir, estatuir, fruir/usufruir, imbuir,
-- imiscuir, poluir, possuir, pruir
-- FIXME: old module lists short pp incluso for incluir that can't be verified, ask about this
-- FIXME: handle -uyr verbs?
match = function(verb)
-- Don't match -guir verbs (e.g. [[seguir]], [[conseguir]]) or -quir verbs (e.g. [[extorquir]])
if verb:find("guir$") or verb:find("quir$") then
return nil
else
return match_against_verbs("uir", {""})(verb)
end
end,
forms = {
pres_2s = "uis", pres_3s = "ui",
-- all occurrences of accented í in endings handled in combine_stem_ending()
irreg = true,
}
},
{
-- We want to match advir, convir, devir, etc. but not ouvir, servir, etc. No way to avoid listing each verb.
match = match_against_verbs("vir", {"ad", "^a", "con", "contra", "de", "^desa", "inter", "pro", "^re", "sobre", "^"}),
forms = {
pres_2s = function(base, prefix) return prefix == "" and "vens" or "véns" end,
pres_3s = function(base, prefix) return prefix == "" and "vem" or "vém" end,
pres_2p = "vindes", pres_3p = "vêm",
pres1_and_sub = "venh",
full_impf = "vinh", impf_1p = "vínhamos", impf_2p = "vínheis",
pret = "vié", pret_1s = "vim", pret_3s = "veio", pret_conj = "irreg",
pp = "vindo",
irreg = true,
}
},
--------------------------------------------------------------------------------------------
-- misc --
--------------------------------------------------------------------------------------------
{
-- pôr, antepor, apor, compor/decompor/descompor, contrapor, depor, dispor, expor, impor, interpor, justapor,
-- opor, pospor, propor, repor, sobrepor, supor/pressupor, transpor, superseded forms like [[decompôr]], others?
match = "p[oô]r",
forms = {
pres1_and_sub = "ponh",
pres_2s = "pões", pres_3s = "põe", pres_1p = "pomos", pres_2p = "pondes", pres_3p = "põem",
full_impf = "punh", impf_1p = "púnhamos", impf_2p = "púnheis",
pret = "pusé", pret_1s = "pus", pret_3s = "pôs", pret_conj = "irreg",
pers_inf = "po",
gerund = "pondo", pp = "posto",
irreg = true,
}
},
}
local function skip_slot(base, slot, allow_overrides)
if not allow_overrides and (base.basic_overrides[slot] or
base.refl and base.basic_reflexive_only_overrides[slot]) then
-- Skip any slots for which there are overrides.
return true
end
if base.only3s and (slot:find("^pp_f") or slot:find("^pp_mp")) then
-- diluviar, atardecer, neviscar; impersonal verbs have only masc sing pp
return true
end
if not slot:find("[123]") then
-- Don't skip non-personal slots.
return false
end
if base.nofinite then
return true
end
if (base.only3s or base.only3sp or base.only3p) and (slot:find("^imp_") or slot:find("^neg_imp_")) then
return true
end
if base.only3s and not slot:find("3s") then
-- diluviar, atardecer, neviscar
return true
end
if base.only3sp and not slot:find("3[sp]") then
-- atañer, concernir
return true
end
if base.only3p and not slot:find("3p") then
-- [[caer cuatro gotas]], [[caer chuzos de punta]], [[entrarle los siete males]]
return true
end
return false
end
-- Apply vowel alternations to stem.
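-- For example (illustrative): given the stem 'sent' (from [[sentir]]) and the alternation 'i-e', this produces
-- pres1_and_sub = 'sint' and leaves pres_stressed unset, so the stressed forms fall back to the default stem;
-- given 'dorm' (from [[dormir]]) and 'u-o', it produces pres1_and_sub = 'durm'.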
local function apply_vowel_alternations(stem, alternations)
local alternation_stems = {}
local saw_pres1_and_sub = false
local saw_pres_stressed = false
-- Process alternations other than +.
for _, altobj in ipairs(alternations) do
local alt = altobj.form
local pres1_and_sub, pres_stressed, err
-- Treat final -gu, -qu as a consonant, so the previous vowel can alternate (e.g. conseguir -> consigo).
-- This means a verb in -guar can't have a u-ú alternation but I don't think there are any verbs like that.
stem = rsub(stem, "([gq])u$", "%1" .. TEMPC1)
if alt == "+" then
-- do nothing yet
elseif alt == "ei" then
local before_last_vowel = rmatch(stem, "^(.*)i$")
if not before_last_vowel then
err = "stem should end in -i"
else
pres1_and_sub = nil
pres_stressed = before_last_vowel .. "ei"
end
else
local before_last_vowel, last_vowel, after_last_vowel = rmatch(stem, "^(.*)(" .. V .. ")(.-[ui])$")
if not before_last_vowel then
before_last_vowel, last_vowel, after_last_vowel = rmatch(stem, "^(.*)(" .. V .. ")(.-)$")
end
if alt == "i-e" then
if last_vowel == "e" or last_vowel == "i" then
pres1_and_sub = before_last_vowel .. "i" .. after_last_vowel
if last_vowel == "i" then
pres_stressed = before_last_vowel .. "e" .. after_last_vowel
end
else
err = "should have -e- or -i- as the last vowel"
end
elseif alt == "i" then
if last_vowel == "e" then
pres1_and_sub = before_last_vowel .. "i" .. after_last_vowel
pres_stressed = pres1_and_sub
else
err = "should have -e- as the last vowel"
end
elseif alt == "u-o" then
if last_vowel == "o" or last_vowel == "u" then
pres1_and_sub = before_last_vowel .. "u" .. after_last_vowel
if last_vowel == "u" then
pres_stressed = before_last_vowel .. "o" .. after_last_vowel
end
else
err = "should have -o- or -u- as the last vowel"
end
elseif alt == "u" then
if last_vowel == "o" then
pres1_and_sub = before_last_vowel .. "u" .. after_last_vowel
pres_stressed = pres1_and_sub
else
err = "should have -o- as the last vowel"
end
elseif alt == "í" then
if last_vowel == "i" then
pres_stressed = before_last_vowel .. "í" .. after_last_vowel
else
err = "should have -i- as the last vowel"
end
elseif alt == "ú" then
if last_vowel == "u" then
pres_stressed = before_last_vowel .. "ú" .. after_last_vowel
else
err = "should have -u- as the last vowel"
end
else
error("Internal error: Unrecognized vowel alternation '" .. alt .. "'")
end
end
if pres1_and_sub then
pres1_and_sub = {form = pres1_and_sub:gsub(TEMPC1, "u"), footnotes = altobj.footnotes}
saw_pres1_and_sub = true
end
if pres_stressed then
pres_stressed = {form = pres_stressed:gsub(TEMPC1, "u"), footnotes = altobj.footnotes}
saw_pres_stressed = true
end
table.insert(alternation_stems, {
altobj = altobj,
pres1_and_sub = pres1_and_sub,
pres_stressed = pres_stressed,
err = err
})
end
-- Now do +. We check to see which stems are used by other alternations and specify those so any footnotes are
-- properly attached.
for _, alternation_stem in ipairs(alternation_stems) do
if alternation_stem.altobj.form == "+" then
local stemobj = {form = stem, footnotes = alternation_stem.altobj.footnotes}
alternation_stem.pres1_and_sub = saw_pres1_and_sub and stemobj or nil
alternation_stem.pres_stressed = saw_pres_stressed and stemobj or nil
end
end
return alternation_stems
end
-- Add the `stem` to the `ending` for the given `slot` and apply any phonetic modifications.
-- WARNING: This function is written very carefully; changes to it can easily have unintended consequences.
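-- For example (illustrative): for the pres_1s of [[vencer]] this joins stem 'venc' and ending 'o', applying the
-- c -> ç spelling change to give 'venço'; for the pret_1s of [[marcar]] it joins 'marc' and 'ei' to give 'marquei'.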
local function combine_stem_ending(base, slot, prefix, stem, ending, dont_include_prefix)
-- If the stem begins with an acute accent, this is a signal to move the accent onto the last vowel of the prefix.
-- Cf. míngua of minguar.
if stem:find("^" .. AC) then
stem = rsub(stem, "^" .. AC, "")
if dont_include_prefix then
error("Internal error: Can't handle acute accent at beginning of stem if dont_include_prefix is given")
end
prefix = rsub(prefix, "([aeiouyAEIOUY])([^aeiouyAEIOUY]*)$", "%1" .. AC .. "%2")
end
-- Use the full stem for checking for -gui ending and such, because 'stem' is just 'u' for [[arguir]],
-- [[delinquir]].
local full_stem = prefix .. stem
-- Include the prefix in the stem unless dont_include_prefix is given (used for the past participle stem).
if not dont_include_prefix then
stem = prefix .. stem
end
-- If the ending begins with a double asterisk, this is a signal to conditionally delete the accent on the last letter
-- of the stem. "Conditionally" means we don't do it if the last two letters would form a diphthong without the accent
-- on the second one (e.g. in [[sair]], with stem 'saí'); but as an exception, we do delete the accent in stems
-- ending in -guí, -quí (e.g. in [[conseguir]]) because in this case the ui isn't a diphthong.
if ending:find("^%*%*") then
ending = rsub(ending, "^%*%*", "")
if rfind(full_stem, "[gq]uí$") or not rfind(full_stem, V .. "[íú]$") then
stem = remove_final_accent(stem)
end
end
-- If the ending begins with an asterisk, this is a signal to delete the accent on the last letter of the stem.
-- E.g. fizé -> fizermos. Unlike for **, this removal is unconditional, so we get e.g. 'sairmos' not #'saírmos'.
if ending:find("^%*") then
ending = rsub(ending, "^%*", "")
stem = remove_final_accent(stem)
end
-- If ending begins with i, it must get an accent after an unstressed vowel (in some but not all cases) to prevent
-- the two merging into a diphthong:
-- * cair ->
-- * pres: caímos, caís;
-- * impf: all forms (caí-);
-- * pret: caí, caíste (but not caiu), caímos, caístes, caíram;
-- * plup: all forms (caír-);
-- * impf_sub: all forms (caíss-);
-- * fut_sub: caíres, caírem (but not cair, cairmos, cairdes)
-- * pp: caído (but not gerund caindo)
-- * atribuir, other verbs in -uir -> same pattern as for cair etc.
-- * roer ->
-- * pret: roí
-- * impf: all forms (roí-)
-- * pp: roído
if ending:find("^i") and full_stem:find("[aeiou]$") and not full_stem:find("[gq]u$") and ending ~= "ir" and
ending ~= "iu" and ending ~= "indo" and not ending:find("^ir[md]") then
ending = ending:gsub("^i", "í")
end
-- Spelling changes in the stem; it depends on whether the stem given is the pre-front-vowel or
-- pre-back-vowel variant, as indicated by `frontback`. We want these front-back spelling changes to happen
-- between stem and ending, not between prefix and stem; the prefix may not have the same "front/backness"
-- as the stem.
local is_front = rfind(ending, "^[eiéíê]")
if base.frontback == "front" and not is_front then
stem = stem:gsub("c$", "ç") -- conhecer -> conheço, vencer -> venço, descer -> desço
stem = stem:gsub("g$", "j") -- proteger -> protejo, fugir -> fujo
stem = stem:gsub("gu$", "g") -- distinguir -> distingo, conseguir -> consigo
stem = stem:gsub("qu$", "c") -- extorquir -> exturco
stem = stem:gsub("([gq])ü$", "%1u") -- argüir (superseded) -> arguo, delinqüir (superseded) -> delinquo
elseif base.frontback == "back" and is_front then
-- The following changes are all superseded so we don't do them:
-- averiguar -> averigüei, minguar -> mingüei; antiquar -> antiqüei, apropinquar -> apropinqüei
-- stem = stem:gsub("([gq])u$", "%1ü")
stem = stem:gsub("g$", "gu") -- cargar -> carguei, apagar -> apaguei
stem = stem:gsub("c$", "qu") -- marcar -> marquei
stem = stem:gsub("ç$", "c") -- começar -> comecei
-- j does not go to g here; desejar -> deseje not #desege
end
return stem .. ending
end
local function add3(base, slot, stems, endings, footnotes, allow_overrides)
if skip_slot(base, slot, allow_overrides) then
return
end
local function do_combine_stem_ending(stem, ending)
return combine_stem_ending(base, slot, base.prefix, stem, ending)
end
iut.add_forms(base.forms, slot, stems, endings, do_combine_stem_ending, nil, nil, footnotes)
end
local function insert_form(base, slot, form)
if not skip_slot(base, slot) then
iut.insert_form(base.forms, slot, form)
end
end
local function insert_forms(base, slot, forms)
if not skip_slot(base, slot) then
iut.insert_forms(base.forms, slot, forms)
end
end
local function add_single_stem_tense(base, slot_pref, stems, s1, s2, s3, p1, p2, p3)
local function addit(slot, ending)
add3(base, slot_pref .. "_" .. slot, stems, ending)
end
addit("1s", s1)
addit("2s", s2)
addit("3s", s3)
addit("1p", p1)
addit("2p", p2)
addit("3p", p3)
end
local function construct_stems(base, vowel_alt)
local stems = {}
stems.pres_unstressed = base.stems.pres_unstressed or base.inf_stem
stems.pres_stressed =
-- If no_pres_stressed given, pres_stressed stem should be empty so no forms are generated.
base.no_pres_stressed and {} or
base.stems.pres_stressed or
vowel_alt.pres_stressed or
base.inf_stem
stems.pres1_and_sub =
-- If no_pres_stressed given, the entire subjunctive is missing.
base.no_pres_stressed and {} or
-- If no_pres1_and_sub given, pres1 and entire subjunctive are missing.
base.no_pres1_and_sub and {} or
base.stems.pres1_and_sub or
vowel_alt.pres1_and_sub or
nil
stems.pres1 = base.stems.pres1 or stems.pres1_and_sub or stems.pres_stressed
stems.impf = base.stems.impf or base.inf_stem
stems.full_impf = base.stems.full_impf
stems.pret_base = base.stems.pret_base or base.inf_stem
stems.pret = base.stems.pret or iut.map_forms(iut.convert_to_general_list_form(stems.pret_base), function(form)
return form .. base.conj_vowel end)
stems.pret_conj = base.stems.pret_conj or base.conj
stems.fut = base.stems.fut or base.inf_stem .. base.conj
stems.cond = base.stems.cond or stems.fut
stems.pres_sub_stressed = base.stems.pres_sub_stressed or stems.pres1
stems.pres_sub_unstressed = base.stems.pres_sub_unstressed or stems.pres1_and_sub or stems.pres_unstressed
stems.sub_conj = base.stems.sub_conj or base.conj
stems.plup = base.stems.plup or stems.pret
stems.impf_sub = base.stems.impf_sub or stems.pret
stems.fut_sub = base.stems.fut_sub or stems.pret
stems.pers_inf = base.stems.pers_inf or base.inf_stem .. base.conj_vowel
stems.pp = base.stems.pp or base.conj == "ar" and
combine_stem_ending(base, "pp_ms", base.prefix, base.inf_stem, "ado", "dont include prefix") or
-- use combine_stem_ending esp. so we get roído, caído, etc.
combine_stem_ending(base, "pp_ms", base.prefix, base.inf_stem, "ido", "dont include prefix")
stems.pp_ms = stems.pp
local function masc_to_fem(form)
if rfind(form, "o$") then
return rsub(form, "o$", "a")
else
return form
end
end
stems.pp_fs = iut.map_forms(iut.convert_to_general_list_form(stems.pp_ms), masc_to_fem)
if base.stems.short_pp then
stems.short_pp_ms = base.stems.short_pp
stems.short_pp_fs = iut.map_forms(iut.convert_to_general_list_form(stems.short_pp_ms), masc_to_fem)
end
base.this_stems = stems
end
local function add_present_indic(base)
local stems = base.this_stems
local function addit(slot, stems, ending)
add3(base, "pres_" .. slot, stems, ending)
end
local s2, s3, p1, p2, p3
if base.conj == "ar" then
s2, s3, p1, p2, p3 = "as", "a", "amos", "ais", "am"
elseif base.conj == "er" or base.conj == "or" then -- verbs in -por have the present overridden
s2, s3, p1, p2, p3 = "es", "e", "emos", "eis", "em"
elseif base.conj == "ir" then
s2, s3, p1, p2, p3 = "es", "e", "imos", "is", "em"
else
error("Internal error: Unrecognized conjugation " .. base.conj)
end
addit("1s", stems.pres1, "o")
addit("2s", stems.pres_stressed, s2)
addit("3s", stems.pres_stressed, s3)
addit("1p", stems.pres_unstressed, p1)
addit("2p", stems.pres_unstressed, p2)
addit("3p", stems.pres_stressed, p3)
end
local function add_present_subj(base)
local stems = base.this_stems
local function addit(slot, stems, ending)
add3(base, "pres_sub_" .. slot, stems, ending)
end
local s1, s2, s3, p1, p2, p3
if stems.sub_conj == "ar" then
s1, s2, s3, p1, p2, p3 = "e", "es", "e", "emos", "eis", "em"
else
s1, s2, s3, p1, p2, p3 = "a", "as", "a", "amos", "ais", "am"
end
addit("1s", stems.pres_sub_stressed, s1)
addit("2s", stems.pres_sub_stressed, s2)
addit("3s", stems.pres_sub_stressed, s3)
addit("1p", stems.pres_sub_unstressed, p1)
addit("2p", stems.pres_sub_unstressed, p2)
addit("3p", stems.pres_sub_stressed, p3)
end
local function add_finite_non_present(base)
local stems = base.this_stems
local function add_tense(slot, stem, s1, s2, s3, p1, p2, p3)
add_single_stem_tense(base, slot, stem, s1, s2, s3, p1, p2, p3)
end
if stems.full_impf then
-- An override needs to be supplied for the impf_1p and impf_2p due to the written accent on the stem.
add_tense("impf", stems.full_impf, "a", "as", "a", {}, {}, "am")
elseif base.conj == "ar" then
add_tense("impf", stems.impf, "ava", "avas", "ava", "ávamos", "áveis", "avam")
else
add_tense("impf", stems.impf, "ia", "ias", "ia", "íamos", "íeis", "iam")
end
-- * at the beginning of the ending means to remove a final accent from the preterite stem.
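-- E.g. (illustrative) the irregular preterite stem 'tivé' of [[ter]] plus '*ste' gives 'tiveste', and plus
-- '*mos' gives 'tivemos'.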
if stems.pret_conj == "irreg" then
add_tense("pret", stems.pret, {}, "*ste", {}, "*mos", "*stes", "*ram")
elseif stems.pret_conj == "ar" then
add_tense("pret", stems.pret_base, "ei", "aste", "ou",
{{form = VAR_BR .. "amos", footnotes = {"[Brazilian Portuguese]"}}, {form = VAR_PT .. "ámos", footnotes = {"[European Portuguese]"}}}, "astes", "aram")
elseif stems.pret_conj == "er" then
add_tense("pret", stems.pret_base, "i", "este", "eu", "emos", "estes", "eram")
else
add_tense("pret", stems.pret_base, "i", "iste", "iu", "imos", "istes", "iram")
end
-- * at the beginning of the ending means to remove a final accent from the stem.
-- ** is similar but is "conditional" on a consonant preceding the final vowel.
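-- E.g. (illustrative) 'tivé' + '**ra' gives 'tivera', but 'caí' (from [[cair]]) + '**ra' keeps the accent and
-- gives 'caíra', because 'ai' would otherwise form a diphthong.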
add_tense("plup", stems.plup, "**ra", "**ras", "**ra", "ramos", "reis", "**ram")
add_tense("impf_sub", stems.impf_sub, "**sse", "**sses", "**sse", "ssemos", "sseis", "**ssem")
add_tense("fut_sub", stems.fut_sub, "*r", "**res", "*r", "*rmos", "*rdes", "**rem")
local mark = TEMP_MESOCLITIC_INSERTION_POINT
add_tense("fut", stems.fut, mark .. "ei", mark .. "ás", mark .. "á", mark .. "emos", mark .. "eis", mark .. "ão")
add_tense("cond", stems.cond, mark .. "ia", mark .. "ias", mark .. "ia", mark .. "íamos", mark .. "íeis", mark .. "iam")
-- Different stems for different parts of the personal infinitive to correctly handle forms of [[sair]] and [[pôr]].
add_tense("pers_inf", base.non_prefixed_verb, "", {}, "", {}, {}, {})
add_tense("pers_inf", stems.pers_inf, {}, "**res", {}, "*rmos", "*rdes", "**rem")
end
local function add_non_finite_forms(base)
local stems = base.this_stems
local function addit(slot, stems, ending, footnotes)
add3(base, slot, stems, ending, footnotes)
end
insert_form(base, "infinitive", {form = base.verb})
-- Also insert "infinitive + reflexive pronoun" combinations if we're handling a reflexive verb. See comment below for
-- "gerund + reflexive pronoun" combinations.
if base.refl then
for _, persnum in ipairs(person_number_list) do
insert_form(base, "infinitive_" .. persnum, {form = base.verb})
end
end
-- verbs in -por have the gerund overridden
local ger_ending = base.conj == "ar" and "ando" or base.conj == "er" and "endo" or "indo"
addit("gerund", stems.pres_unstressed, ger_ending)
-- Also insert "gerund + reflexive pronoun" combinations if we're handling a reflexive verb. We insert exactly the same
-- form as for the bare gerund; later on in add_reflexive_or_fixed_clitic_to_forms(), we add the appropriate clitic
-- pronouns. It's important not to do this for non-reflexive verbs, because in that case, the clitic pronouns won't be
-- added, and {{pt-verb form of}} will wrongly consider all these combinations as possible inflections of the bare
-- gerund. Thanks to [[User:JeffDoozan]] for this bug fix.
if base.refl then
for _, persnum in ipairs(person_number_list) do
addit("gerund_" .. persnum, stems.pres_unstressed, ger_ending)
end
end
-- Skip the long/short past participle footnotes if called from {{pt-verb}} so they don't show in the headword.
local long_pp_footnotes =
stems.short_pp_ms and base.alternant_multiword_spec.source_template ~= "pt-verb" and {long_pp_footnote} or nil
addit("pp_ms", stems.pp_ms, "", long_pp_footnotes)
if not base.pp_inv then
addit("pp_fs", stems.pp_fs, "", long_pp_footnotes)
addit("pp_mp", stems.pp_ms, "s", long_pp_footnotes)
addit("pp_fp", stems.pp_fs, "s", long_pp_footnotes)
end
if stems.short_pp_ms then
local short_pp_footnotes =
stems.short_pp_ms and base.alternant_multiword_spec.source_template ~= "pt-verb" and {short_pp_footnote} or nil
addit("short_pp_ms", stems.short_pp_ms, "", short_pp_footnotes)
if not base.pp_inv then
addit("short_pp_fs", stems.short_pp_fs, "", short_pp_footnotes)
addit("short_pp_mp", stems.short_pp_ms, "s", short_pp_footnotes)
addit("short_pp_fp", stems.short_pp_fs, "s", short_pp_footnotes)
end
end
end
local function copy_forms_to_imperatives(base)
-- Copy pres_3s to imp_2s since they are almost always the same.
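-- E.g. (illustrative) for [[esbaldar-se]], pres_3s 'esbalda' is copied to imp_2s, yielding [[esbalda-te]] once
-- the reflexive clitic is attached later on.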
insert_forms(base, "imp_2s", iut.map_forms(base.forms.pres_3s, function(form) return form end))
if not skip_slot(base, "imp_2p") then
-- Copy pres_2p minus its final -s to imp_2p since they are almost always the same.
-- But not if there's an override, to avoid possibly throwing an error.
insert_forms(base, "imp_2p", iut.map_forms(base.forms.pres_2p, function(form)
if not form:find("s$") then
error("Can't derive second-person plural imperative from second-person plural present indicative " ..
"because form '" .. form .. "' doesn't end in -s")
end
return rsub(form, "s$", "")
end))
end
-- Copy subjunctives to imperatives, unless there's an override for the given slot (as with the imp_1p of [[ir]]).
for _, persnum in ipairs({"3s", "1p", "3p"}) do
local from = "pres_sub_" .. persnum
local to = "imp_" .. persnum
insert_forms(base, to, iut.map_forms(base.forms[from], function(form) return form end))
end
end
local function process_slot_overrides(base, filter_slot, reflexive_only)
local overrides = reflexive_only and base.basic_reflexive_only_overrides or base.basic_overrides
for slot, forms in pairs(overrides) do
if not filter_slot or filter_slot(slot) then
add3(base, slot, forms, "", nil, "allow overrides")
end
end
end
-- Prefix `form` with `clitic`, adding fixed text `between` between them. Add links as appropriate unless the user
-- requested no links. Check whether form already has brackets (as will be the case if the form has a fixed clitic).
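-- E.g. (illustrative) prefix_clitic_to_form(base, "te", " ", "esbalde") normally returns "[[te]] [[esbalde]]";
-- with noautolinkverb it returns "te esbalde".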
local function prefix_clitic_to_form(base, clitic, between, form)
if base.alternant_multiword_spec.args.noautolinkverb then
return clitic .. between .. form
else
local clitic_pref = "[[" .. clitic .. "]]" .. between
if form:find("%[%[") then
return clitic_pref .. form
else
return clitic_pref .. "[[" .. form .. "]]"
end
end
end
-- Add the appropriate clitic pronouns in `clitics` to the forms in `base_slot`. `store_cliticized_form` is a function
-- of three arguments (clitic, formobj, cliticized_form) and should store the cliticized form for the specified clitic
-- and form object.
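-- E.g. (illustrative): for the impf_1p 'esbaldávamos' with clitic 'nos', the cliticized form is
-- '[[esbaldávamos|esbaldávamo]]-[[nos]]' (final -s dropped; see below); for a future form with the mesoclitic
-- insertion point between 'esbaldar' and 'ei' and clitic 'me', it is '[[esbaldar]]-[[me]]-[[esbaldarei|ei]]'.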
local function suffix_clitic_to_forms(base, base_slot, clitics, store_cliticized_form)
if not base.forms[base_slot] then
-- This can happen, e.g. in only3s/only3sp/only3p verbs.
return
end
local autolink = not base.alternant_multiword_spec.args.noautolinkverb
for _, formobj in ipairs(base.forms[base_slot]) do
for _, clitic in ipairs(clitics) do
local cliticized_form
if formobj.form:find(TEMP_MESOCLITIC_INSERTION_POINT) then
-- mesoclisis in future and conditional
local infinitive, suffix = rmatch(formobj.form, "^(.*)" .. TEMP_MESOCLITIC_INSERTION_POINT .. "(.*)$")
if not infinitive then
error("Internal error: Can't find mesoclitic insertion point in slot '" .. base_slot .. "', form '" ..
formobj.form .. "'")
end
local full_form = infinitive .. suffix
if autolink and not infinitive:find("%[%[") then
infinitive = "[[" .. infinitive .. "]]"
end
cliticized_form =
autolink and infinitive .. "-[[" .. clitic .. "]]-[[" .. full_form .. "|" .. suffix .. "]]" or
infinitive .. "-" .. clitic .. "-" .. suffix
else
local clitic_suffix = autolink and "-[[" .. clitic .. "]]" or "-" .. clitic
local form_needs_link = autolink and not formobj.form:find("%[%[")
if base_slot:find("1p$") then
-- Final -s disappears: esbaldávamos + nos -> esbaldávamo-nos, etc.
cliticized_form = formobj.form:gsub("s$", "")
if form_needs_link then
cliticized_form = "[[" .. formobj.form .. "|" .. cliticized_form .. "]]"
end
else
cliticized_form = formobj.form
if form_needs_link then
cliticized_form = "[[" .. cliticized_form .. "]]"
end
end
cliticized_form = cliticized_form .. clitic_suffix
end
store_cliticized_form(clitic, formobj, cliticized_form)
end
end
end
-- Add a reflexive pronoun or fixed clitic (FIXME: not working), as appropriate to the base forms that were generated.
-- `do_joined` means to do only the forms where the pronoun is joined to the end of the form; otherwise, do only the
-- forms where it is not joined and precedes the form.
local function add_reflexive_or_fixed_clitic_to_forms(base, do_reflexive, do_joined)
for _, slotaccel in ipairs(base.alternant_multiword_spec.verb_slots_basic) do
local slot, accel = unpack(slotaccel)
local clitic
if not do_reflexive then
clitic = base.clitic
elseif slot:find("[123]") then
local persnum = slot:match("^.*_(.-)$")
clitic = person_number_to_reflexive_pronoun[persnum]
else
clitic = "se"
end
if base.forms[slot] then
if do_reflexive and slot:find("^pp_") or slot == "infinitive_linked" then
-- do nothing with reflexive past participles or with infinitive linked (handled at the end)
elseif slot:find("^neg_imp_") then
error("Internal error: Should not have forms set for negative imperative at this stage")
else
local slot_has_suffixed_clitic = not slot:find("_sub")
-- Maybe generate non-reflexive parts and separated syntactic variants for use in {{pt-verb form of}}.
-- See comment in add_slots() above `need_special_verb_form_of_slots`. Check for do_joined so we only
-- run this code once.
if do_reflexive and do_joined and base.alternant_multiword_spec.source_template == "pt-verb form of" and
-- Skip personal variants of infinitives and gerunds so we don't think [[esbaldando]] is a
-- non-reflexive equivalent of [[esbaldando-me]].
not slot:find("infinitive_") and not slot:find("gerund_") then
-- Clone the forms because we will be destructively modifying them just below, adding the reflexive
-- pronoun.
insert_forms(base, slot .. "_non_reflexive", mw.clone(base.forms[slot]))
if slot_has_suffixed_clitic then
insert_forms(base, slot .. "_variant", iut.map_forms(base.forms[slot], function(form)
return prefix_clitic_to_form(base, clitic, " ... ", form)
end))
end
end
if slot_has_suffixed_clitic then
if do_joined then
suffix_clitic_to_forms(base, slot, {clitic},
function(clitic, formobj, cliticized_form)
formobj.form = cliticized_form
end
)
end
elseif not do_joined then
-- Add clitic as separate word before all other forms.
for _, form in ipairs(base.forms[slot]) do
form.form = prefix_clitic_to_form(base, clitic, " ", form.form)
end
end
end
end
end
end
local function handle_infinitive_linked(base)
-- Compute linked versions of potential lemma slots, for use in {{pt-verb}}.
-- We substitute the original lemma (before removing links) for forms that
-- are the same as the lemma, if the original lemma has links.
for _, slot in ipairs({"infinitive"}) do
insert_forms(base, slot .. "_linked", iut.map_forms(base.forms[slot], function(form)
if form == base.lemma and rfind(base.linked_lemma, "%[%[") then
return base.linked_lemma
else
return form
end
end))
end
end
local function generate_negative_imperatives(base)
-- Copy subjunctives to negative imperatives, preceded by "não".
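-- E.g. (illustrative) pres_sub_3s 'esbalde' yields neg_imp_3s '[[não]] [[esbalde]]'; reflexive forms such as
-- '[[se]] [[esbalde]]' already contain links, so they just get '[[não]] ' prepended.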
for _, persnum in ipairs(neg_imp_person_number_list) do
local from = "pres_sub_" .. persnum
local to = "neg_imp_" .. persnum
insert_forms(base, to, iut.map_forms(base.forms[from], function(form)
if base.alternant_multiword_spec.args.noautolinkverb then
return "não " .. form
elseif form:find("%[%[") then
-- already linked, e.g. when reflexive
return "[[não]] " .. form
else
return "[[não]] [[" .. form .. "]]"
end
end))
end
end
-- Process specs given by the user using 'addnote[SLOTSPEC][FOOTNOTE][FOOTNOTE][...]'.
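-- E.g. (illustrative, hypothetical spec) 'addnote[pres_1s,pres_2s][rare]' attaches the footnote 'rare' to the
-- pres_1s and pres_2s forms; each SLOTSPEC is treated as an anchored Lua pattern, so 'addnote[imp_.*][...]'
-- would cover all positive imperative slots.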
local function process_addnote_specs(base)
for _, spec in ipairs(base.addnote_specs) do
for _, slot_spec in ipairs(spec.slot_specs) do
slot_spec = "^" .. slot_spec .. "$"
for slot, forms in pairs(base.forms) do
if rfind(slot, slot_spec) then
-- To save on memory, side-effect the existing forms.
for _, form in ipairs(forms) do
form.footnotes = iut.combine_footnotes(form.footnotes, spec.footnotes)
end
end
end
end
end
end
local function add_missing_links_to_forms(base)
-- Any forms without links should get them now. Redundant ones will be stripped later.
for slot, forms in pairs(base.forms) do
for _, form in ipairs(forms) do
if not form.form:find("%[%[") then
form.form = "[[" .. form.form .. "]]"
end
end
end
end
-- Remove special characters added to future and conditional forms to indicate mesoclitic insertion points.
local function remove_mesoclitic_insertion_points(base)
for slot, forms in pairs(base.forms) do
if slot:find("^fut_") or slot:find("^cond_") then
for _, form in ipairs(forms) do
form.form = form.form:gsub(TEMP_MESOCLITIC_INSERTION_POINT, "")
end
end
end
end
-- If called from {{pt-verb}}, remove superseded forms; otherwise add a footnote indicating they are superseded.
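-- E.g. (illustrative) the superseded spelling 'vêem' of [[ver]] (see the built-in entry above) is dropped from
-- the {{pt-verb}} headword but shown with a '[superseded]' footnote in the conjugation table.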
local function process_superseded_forms(base)
if base.alternant_multiword_spec.source_template == "pt-verb" then
for slot, forms in pairs(base.forms) do
-- As an optimization, check if there are any superseded forms and don't do anything if not.
local saw_superseded = false
for _, form in ipairs(forms) do
if form.form:find(VAR_SUPERSEDED) then
saw_superseded = true
break
end
end
if saw_superseded then
base.forms[slot] = iut.flatmap_forms(base.forms[slot], function(form)
if form:find(VAR_SUPERSEDED) then
return {}
else
return {form}
end
end)
end
end
else
for slot, forms in pairs(base.forms) do
for _, form in ipairs(forms) do
if form.form:find(VAR_SUPERSEDED) then
form.footnotes = iut.combine_footnotes(form.footnotes, {"[superseded]"})
end
end
end
end
end
local function conjugate_verb(base)
for _, vowel_alt in ipairs(base.vowel_alt_stems) do
construct_stems(base, vowel_alt)
add_present_indic(base)
add_present_subj(base)
end
add_finite_non_present(base)
add_non_finite_forms(base)
-- do non-reflexive non-imperative slot overrides
process_slot_overrides(base, function(slot)
return not slot:find("^imp_") and not slot:find("^neg_imp_")
end)
-- This should happen after process_slot_overrides() in case a derived slot is based on an override
-- (as with the imp_3s of [[dar]], [[estar]]).
copy_forms_to_imperatives(base)
-- do non-reflexive positive imperative slot overrides
process_slot_overrides(base, function(slot)
return slot:find("^imp_")
end)
-- We need to add joined reflexives, then joined and non-joined clitics, then non-joined reflexives, so we get
-- [[esbalda-te]] but [[não]] [[te]] [[esbalde]].
if base.refl then
-- This should happen after remove_monosyllabic_accents() so the * marking the preservation of monosyllabic
-- accents doesn't end up in the middle of a word.
add_reflexive_or_fixed_clitic_to_forms(base, "do reflexive", "do joined")
process_slot_overrides(base, nil, "do reflexive") -- do reflexive-only slot overrides
add_reflexive_or_fixed_clitic_to_forms(base, "do reflexive", false)
end
-- This should happen after add_reflexive_or_fixed_clitic_to_forms() so negative imperatives get the reflexive pronoun
-- and clitic in them.
generate_negative_imperatives(base)
-- do non-reflexive negative imperative slot overrides
-- FIXME: What about reflexive negative imperatives?
process_slot_overrides(base, function(slot)
return slot:find("^neg_imp_")
end)
-- This should happen before add_missing_links_to_forms() so that the comparison `form == base.lemma`
-- in handle_infinitive_linked() works correctly and compares unlinked forms to unlinked forms.
handle_infinitive_linked(base)
process_addnote_specs(base)
if not base.alternant_multiword_spec.args.noautolinkverb then
add_missing_links_to_forms(base)
end
remove_mesoclitic_insertion_points(base)
process_superseded_forms(base)
end
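-- Parse an indicator spec enclosed in angle brackets, e.g. <i-e>, <no_pres_stressed> or <u-o.only3sp> (vowel
-- alternants, indicator flags and stem/form overrides, separated by dots), as referenced in the comments on the
-- built-in verb list above. Returns a `base` object describing the parsed spec.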
local function parse_indicator_spec(angle_bracket_spec)
-- Store the original angle bracket spec so we can reconstruct the overall conj spec with the lemma(s) in them.
local base = {
angle_bracket_spec = angle_bracket_spec,
user_basic_overrides = {},
user_stems = {},
addnote_specs = {},
}
local function parse_err(msg)
error(msg .. ": " .. angle_bracket_spec)
end
local function fetch_footnotes(separated_group)
local footnotes
for j = 2, #separated_group - 1, 2 do
if separated_group[j + 1] ~= "" then
parse_err("Extraneous text after bracketed footnotes: '" .. table.concat(separated_group) .. "'")
end
if not footnotes then
footnotes = {}
end
table.insert(footnotes, separated_group[j])
end
return footnotes
end
local inside = angle_bracket_spec:match("^<(.*)>$")
assert(inside)
if inside == "" then
return base
end
local segments = iut.parse_balanced_segment_run(inside, "[", "]")
local dot_separated_groups = iut.split_alternating_runs(segments, "%.")
for i, dot_separated_group in ipairs(dot_separated_groups) do
local first_element = dot_separated_group[1]
if first_element == "addnote" then
local spec_and_footnotes = fetch_footnotes(dot_separated_group)
if #spec_and_footnotes < 2 then
parse_err("Spec with 'addnote' should be of the form 'addnote[SLOTSPEC][FOOTNOTE][FOOTNOTE][...]'")
end
local slot_spec = table.remove(spec_and_footnotes, 1)
local slot_spec_inside = rmatch(slot_spec, "^%[(.*)%]$")
if not slot_spec_inside then
parse_err("Internal error: slot_spec " .. slot_spec .. " should be surrounded with brackets")
end
local slot_specs = rsplit(slot_spec_inside, ",")
-- FIXME: Here, [[Module:it-verb]] called strip_spaces(). Generally we don't do this. Should we?
table.insert(base.addnote_specs, {slot_specs = slot_specs, footnotes = spec_and_footnotes})
elseif indicator_flags[first_element] then
if #dot_separated_group > 1 then
parse_err("No footnotes allowed with '" .. first_element .. "' spec")
end
if base[first_element] then
parse_err("Spec '" .. first_element .. "' specified twice")
end
base[first_element] = true
elseif rfind(first_element, ":") then
local colon_separated_groups = iut.split_alternating_runs(dot_separated_group, "%s*:%s*")
local first_element = colon_separated_groups[1][1]
if #colon_separated_groups[1] > 1 then
parse_err("Can't attach footnotes directly to '" .. first_element .. "' spec; attach them to the " ..
"colon-separated values following the initial colon")
end
if overridable_stems[first_element] then
if base.user_stems[first_element] then
parse_err("Overridable stem '" .. first_element .. "' specified twice")
end
table.remove(colon_separated_groups, 1)
base.user_stems[first_element] = overridable_stems[first_element](colon_separated_groups,
{prefix = first_element, base = base, parse_err = parse_err, fetch_footnotes = fetch_footnotes})
else -- assume a basic override; we validate further later when the possible slots are available
if base.user_basic_overrides[first_element] then
parse_err("Basic override '" .. first_element .. "' specified twice")
end
table.remove(colon_separated_groups, 1)
base.user_basic_overrides[first_element] = allow_multiple_values(colon_separated_groups,
{prefix = first_element, base = base, parse_err = parse_err, fetch_footnotes = fetch_footnotes})
end
else
local comma_separated_groups = iut.split_alternating_runs(dot_separated_group, "%s*,%s*")
for j = 1, #comma_separated_groups do
local alt = comma_separated_groups[j][1]
if not vowel_alternants[alt] then
if #comma_separated_groups == 1 then
parse_err("Unrecognized spec or vowel alternant '" .. alt .. "'")
else
parse_err("Unrecognized vowel alternant '" .. alt .. "'")
end
end
if base.vowel_alt then
for _, existing_alt in ipairs(base.vowel_alt) do
if existing_alt.form == alt then
parse_err("Vowel alternant '" .. alt .. "' specified twice")
end
end
else
base.vowel_alt = {}
end
table.insert(base.vowel_alt, {form = alt, footnotes = fetch_footnotes(comma_separated_groups[j])})
end
end
end
return base
end
-- Normalize all lemmas, substituting the pagename for blank lemmas and adding links to multiword lemmas.
local function normalize_all_lemmas(alternant_multiword_spec, head)
-- (1) Add links to all before and after text. Remember the original text so we can reconstruct the verb spec later.
if not alternant_multiword_spec.args.noautolinktext then
iut.add_links_to_before_and_after_text(alternant_multiword_spec, "remember original")
end
-- (2) Remove any links from the lemma, but remember the original form
-- so we can use it below in the 'lemma_linked' form.
iut.map_word_specs(alternant_multiword_spec, function(base)
if base.lemma == "" then
base.lemma = head
end
base.user_specified_lemma = base.lemma
base.lemma = m_links.remove_links(base.lemma)
local refl_verb = base.lemma
local verb, refl = rmatch(refl_verb, "^(.-)%-(se)$")
if not verb then
verb, refl = refl_verb, nil
end
base.user_specified_verb = verb
base.refl = refl
base.verb = base.user_specified_verb
local linked_lemma
if alternant_multiword_spec.args.noautolinkverb or base.user_specified_lemma:find("%[%[") then
linked_lemma = base.user_specified_lemma
elseif base.refl then
-- Reconstruct the linked lemma with separate links around base verb and reflexive pronoun.
linked_lemma = base.user_specified_verb == base.verb and "[[" .. base.user_specified_verb .. "]]" or
"[[" .. base.verb .. "|" .. base.user_specified_verb .. "]]"
linked_lemma = linked_lemma .. (refl and "-[[" .. refl .. "]]" or "")
else
-- Add links to the lemma so the user doesn't specifically need to, since we preserve
-- links in multiword lemmas and include links in non-lemma forms rather than allowing
-- the entire form to be a link.
linked_lemma = iut.add_links(base.user_specified_lemma)
end
base.linked_lemma = linked_lemma
end)
end
local function detect_indicator_spec(base)
if (base.only3s and 1 or 0) + (base.only3sp and 1 or 0) + (base.only3p and 1 or 0) > 1 then
error("Only one of 'only3s', 'only3sp' and 'only3p' can be specified")
end
base.forms = {}
base.stems = {}
base.basic_overrides = {}
base.basic_reflexive_only_overrides = {}
if not base.no_built_in then
for _, built_in_conj in ipairs(built_in_conjugations) do
if type(built_in_conj.match) == "function" then
base.prefix, base.non_prefixed_verb = built_in_conj.match(base.verb)
elseif built_in_conj.match:find("^%^") and rsub(built_in_conj.match, "^%^", "") == base.verb then
-- begins with ^, for exact match, and matches
base.prefix, base.non_prefixed_verb = "", base.verb
else
base.prefix, base.non_prefixed_verb = rmatch(base.verb, "^(.*)(" .. built_in_conj.match .. ")$")
end
if base.prefix then
-- we found a built-in verb
for stem, forms in pairs(built_in_conj.forms) do
if type(forms) == "function" then
forms = forms(base, base.prefix)
end
if stem:find("^refl_") then
stem = stem:gsub("^refl_", "")
if not base.alternant_multiword_spec.verb_slots_basic_map[stem] then
error("Internal error: setting for 'refl_" .. stem .. "' does not refer to a basic verb slot")
end
base.basic_reflexive_only_overrides[stem] = forms
elseif base.alternant_multiword_spec.verb_slots_basic_map[stem] then
-- an individual form override of a basic form
base.basic_overrides[stem] = forms
else
base.stems[stem] = forms
end
end
break
end
end
end
-- Override built-in-verb stems and overrides with user-specified ones.
for stem, values in pairs(base.user_stems) do
base.stems[stem] = values
end
for override, values in pairs(base.user_basic_overrides) do
if not base.alternant_multiword_spec.verb_slots_basic_map[override] then
error("Unrecognized override '" .. override .. "': " .. base.angle_bracket_spec)
end
base.basic_overrides[override] = values
end
base.prefix = base.prefix or ""
base.non_prefixed_verb = base.non_prefixed_verb or base.verb
local inf_stem, suffix = rmatch(base.non_prefixed_verb, "^(.*)([aeioô]r)$")
if not inf_stem then
error("Unrecognized infinitive: " .. base.verb)
end
base.inf_stem = inf_stem
suffix = suffix == "ôr" and "or" or suffix
base.conj = suffix
base.conj_vowel = suffix == "ar" and "á" or suffix == "ir" and "í" or "ê"
base.frontback = suffix == "ar" and "back" or "front"
if base.stems.vowel_alt then -- built-in verb with specified vowel alternation
if base.vowel_alt then
error(base.verb .. " is a recognized built-in verb, and should not have vowel alternations specified with it")
end
base.vowel_alt = iut.convert_to_general_list_form(base.stems.vowel_alt)
end
-- Propagate built-in-verb indicator flags to `base` and combine with user-specified flags.
for indicator_flag, _ in pairs(indicator_flags) do
base[indicator_flag] = base[indicator_flag] or base.stems[indicator_flag]
end
-- Convert vowel alternation indicators into stems.
local vowel_alt = base.vowel_alt or {{form = "+"}}
base.vowel_alt_stems = apply_vowel_alternations(base.inf_stem, vowel_alt)
for _, vowel_alt_stems in ipairs(base.vowel_alt_stems) do
if vowel_alt_stems.err then
error("To use '" .. vowel_alt_stems.altobj.form .. "', present stem '" .. base.prefix .. base.inf_stem .. "' " ..
vowel_alt_stems.err)
end
end
end
local function detect_all_indicator_specs(alternant_multiword_spec)
-- Propagate some settings up; some are used internally, others by [[Module:pt-headword]].
iut.map_word_specs(alternant_multiword_spec, function(base)
-- Internal indicator flags. Do these before calling detect_indicator_spec() because add_slots() uses them.
for _, prop in ipairs { "refl", "clitic" } do
if base[prop] then
alternant_multiword_spec[prop] = true
end
end
base.alternant_multiword_spec = alternant_multiword_spec
end)
add_slots(alternant_multiword_spec)
alternant_multiword_spec.vowel_alt = {}
iut.map_word_specs(alternant_multiword_spec, function(base)
detect_indicator_spec(base)
-- User-specified indicator flags. Do these after calling detect_indicator_spec() because the latter may set these
-- indicators for built-in verbs.
for prop, _ in pairs(indicator_flags) do
if base[prop] then
alternant_multiword_spec[prop] = true
end
end
-- Vowel alternants. Do these after calling detect_indicator_spec() because the latter sets base.vowel_alt for
-- built-in verbs.
if base.vowel_alt then
for _, altobj in ipairs(base.vowel_alt) do
m_table.insertIfNot(alternant_multiword_spec.vowel_alt, altobj.form)
end
end
end)
end
local function add_categories_and_annotation(alternant_multiword_spec, base, multiword_lemma)
local function insert_ann(anntype, value)
m_table.insertIfNot(alternant_multiword_spec.annotation[anntype], value)
end
local function insert_cat(cat, also_when_multiword)
-- Don't place multiword terms in categories like 'Portuguese verbs ending in -ar' to avoid spamming the
-- categories with such terms.
if also_when_multiword or not multiword_lemma then
m_table.insertIfNot(alternant_multiword_spec.categories, "Portuguese " .. cat)
end
end
if check_for_red_links and alternant_multiword_spec.source_template == "pt-conj" and multiword_lemma then
for _, slot_and_accel in ipairs(alternant_multiword_spec.all_verb_slots) do
local slot = slot_and_accel[1]
local forms = base.forms[slot]
local must_break = false
if forms then
for _, form in ipairs(forms) do
if not form.form:find("%[%[") then
local title = mw.title.new(form.form)
if title and not title.exists then
insert_cat("verbs with red links in their inflection tables")
must_break = true
break
end
end
end
end
if must_break then
break
end
end
end
insert_cat("verbs ending in -" .. base.conj)
if base.irreg then
insert_ann("irreg", "irregular")
insert_cat("irregular verbs")
else
insert_ann("irreg", "regular")
end
if base.only3s then
insert_ann("defective", "impersonal")
insert_cat("impersonal verbs")
elseif base.only3sp then
insert_ann("defective", "third-person only")
insert_cat("third-person-only verbs")
elseif base.only3p then
insert_ann("defective", "third-person plural only")
insert_cat("third-person-plural-only verbs")
elseif base.no_pres_stressed or base.no_pres1_and_sub then
insert_ann("defective", "defective")
insert_cat("defective verbs")
else
insert_ann("defective", "regular")
end
if base.stems.short_pp then
insert_ann("short_pp", "irregular short past participle")
insert_cat("verbs with irregular short past participle")
else
insert_ann("short_pp", "regular")
end
if base.clitic then
insert_cat("verbs with lexical clitics")
end
if base.refl then
insert_cat("reflexive verbs")
end
if base.e_ei_cat then
insert_ann("vowel_alt", "''e'' becomes ''ei'' when stressed")
insert_cat("verbs with e becoming ei when stressed")
elseif not base.vowel_alt then
insert_ann("vowel_alt", "non-alternating")
else
for _, alt in ipairs(base.vowel_alt) do
if alt.form == "+" then
insert_ann("vowel_alt", "non-alternating")
else
insert_ann("vowel_alt", vowel_alternant_to_desc[alt.form])
insert_cat("verbs with " .. vowel_alternant_to_cat[alt.form])
end
end
end
local cons_alt = base.stems.cons_alt
if cons_alt == nil then
if base.conj == "ar" then
if base.inf_stem:find("ç$") then
cons_alt = "c-ç"
elseif base.inf_stem:find("c$") then
cons_alt = "c-qu"
elseif base.inf_stem:find("g$") then
cons_alt = "g-gu"
end
else
if base.no_pres_stressed or base.no_pres1_and_sub then
cons_alt = nil -- no e.g. c-ç alternation in this case
elseif base.inf_stem:find("c$") then
cons_alt = "c-ç"
elseif base.inf_stem:find("qu$") then
cons_alt = "c-qu"
elseif base.inf_stem:find("g$") then
cons_alt = "g-j"
elseif base.inf_stem:find("gu$") then
cons_alt = "g-gu"
end
end
end
if cons_alt then
local desc = cons_alt .. " alternation"
insert_ann("cons_alt", desc)
insert_cat("verbs with " .. desc)
else
insert_ann("cons_alt", "non-alternating")
end
end
-- Compute the categories to add the verb to, as well as the annotation to display in the
-- conjugation title bar. We combine these two computations in one function because the categories
-- and the title bar contain similar information.
local function compute_categories_and_annotation(alternant_multiword_spec)
alternant_multiword_spec.categories = {}
local ann = {}
alternant_multiword_spec.annotation = ann
ann.irreg = {}
ann.short_pp = {}
ann.defective = {}
ann.vowel_alt = {}
ann.cons_alt = {}
local multiword_lemma = false
for _, form in ipairs(alternant_multiword_spec.forms.infinitive) do
if form.form:find(" ") then
multiword_lemma = true
break
end
end
iut.map_word_specs(alternant_multiword_spec, function(base)
add_categories_and_annotation(alternant_multiword_spec, base, multiword_lemma)
end)
local ann_parts = {}
local irreg = table.concat(ann.irreg, " or ")
if irreg ~= "" and irreg ~= "regular" then
table.insert(ann_parts, irreg)
end
local short_pp = table.concat(ann.short_pp, " or ")
if short_pp ~= "" and short_pp ~= "regular" then
table.insert(ann_parts, short_pp)
end
local defective = table.concat(ann.defective, " or ")
if defective ~= "" and defective ~= "regular" then
table.insert(ann_parts, defective)
end
local vowel_alt = table.concat(ann.vowel_alt, " or ")
if vowel_alt ~= "" and vowel_alt ~= "non-alternating" then
table.insert(ann_parts, vowel_alt)
end
local cons_alt = table.concat(ann.cons_alt, " or ")
if cons_alt ~= "" and cons_alt ~= "non-alternating" then
table.insert(ann_parts, cons_alt)
end
alternant_multiword_spec.annotation = table.concat(ann_parts, "; ")
end
local function show_forms(alternant_multiword_spec)
local lemmas = alternant_multiword_spec.forms.infinitive
alternant_multiword_spec.lemmas = lemmas -- save for later use in make_table()
if alternant_multiword_spec.forms.short_pp_ms then
alternant_multiword_spec.has_short_pp = true
end
local reconstructed_verb_spec = iut.reconstruct_original_spec(alternant_multiword_spec)
local function transform_accel_obj(slot, formobj, accel_obj)
-- No accelerators for negative imperatives, which are always multiword and derived directly from the
-- present subjunctive.
if slot:find("^neg_imp") then
return nil
end
if accel_obj then
if slot:find("^pp_") then
accel_obj.form = slot
elseif slot == "gerund" then
accel_obj.form = "gerund-" .. reconstructed_verb_spec
else
accel_obj.form = "verb-form-" .. reconstructed_verb_spec
end
end
return accel_obj
end
-- Italicize superseded forms.
local function generate_link(data)
local formval_for_link = data.form.formval_for_link
if formval_for_link:find(VAR_SUPERSEDED) then
formval_for_link = formval_for_link:gsub(VAR_SUPERSEDED, "")
return m_links.full_link({lang = lang, term = formval_for_link, tr = "-", accel = data.form.accel_obj},
"term") .. iut.get_footnote_text(data.form.footnotes, data.footnote_obj)
end
end
local props = {
lang = lang,
lemmas = lemmas,
transform_accel_obj = transform_accel_obj,
canonicalize = function(form) return export.remove_variant_codes(form, "keep superseded") end,
generate_link = generate_link,
slot_list = alternant_multiword_spec.verb_slots_basic,
}
iut.show_forms(alternant_multiword_spec.forms, props)
alternant_multiword_spec.footnote_basic = alternant_multiword_spec.forms.footnote
end
local notes_template = [=[
<div style="width:100%;text-align:left;background:#d9ebff">
<div style="display:inline-block;text-align:left;padding-left:1em;padding-right:1em">
{footnote}
</div></div>]=]
local basic_table = [=[
{description}<div class="NavFrame">
<div class="NavHead" align=center> Conjugation of {title} (See [[Appendix:Portuguese verbs]])</div>
<div class="NavContent" align="left">
{\op}| class="inflection-table" style="background:#F6F6F6; text-align: left; border: 1px solid #999999;" cellpadding="3" cellspacing="0"
|-
! style="border: 1px solid #999999; background:#B0B0B0" rowspan="2" |
! style="border: 1px solid #999999; background:#D0D0D0" colspan="3" | Singular
! style="border: 1px solid #999999; background:#D0D0D0" colspan="3" | Plural
|-
! style="border: 1px solid #999999; background:#D0D0D0; width:12.5%" | First-person<br />(<<eu>>)
! style="border: 1px solid #999999; background:#D0D0D0; width:12.5%" | Second-person<br />(<<tu>>)
! style="border: 1px solid #999999; background:#D0D0D0; width:12.5%" | Third-person<br />(<<ele>> / <<ela>> / <<você>>)
! style="border: 1px solid #999999; background:#D0D0D0; width:12.5%" | First-person<br />(<<nós>>)
! style="border: 1px solid #999999; background:#D0D0D0; width:12.5%" | Second-person<br />(<<vós>>)
! style="border: 1px solid #999999; background:#D0D0D0; width:12.5%" | Third-person<br />(<<eles>> / <<elas>> / <<vocês>>)
|-
! style="border: 1px solid #999999; background:#c498ff" colspan="7" | ''<span title="infinitivo">Infinitive</span>''
|-
! style="border: 1px solid #999999; background:#a478df" | '''<span title="infinitivo impessoal">Impersonal</span>'''
| style="border: 1px solid #999999; vertical-align: top;" colspan="6" | {infinitive}
|-
! style="border: 1px solid #999999; background:#a478df" | '''<span title="infinitivo pessoal">Personal</span>'''
| style="border: 1px solid #999999; vertical-align: top;" | {pers_inf_1s}
| style="border: 1px solid #999999; vertical-align: top;" | {pers_inf_2s}
| style="border: 1px solid #999999; vertical-align: top;" | {pers_inf_3s}
| style="border: 1px solid #999999; vertical-align: top;" | {pers_inf_1p}
| style="border: 1px solid #999999; vertical-align: top;" | {pers_inf_2p}
| style="border: 1px solid #999999; vertical-align: top;" | {pers_inf_3p}
|-
! style="border: 1px solid #999999; background:#98ffc4" colspan="7" | ''<span title="gerúndio">Gerund</span>''
|-
| style="border: 1px solid #999999; background:#78dfa4" |
| style="border: 1px solid #999999; vertical-align: top;" colspan="6" | {gerund}
|-{pp_clause}
! style="border: 1px solid #999999; background:#d0dff4" colspan="7" | ''<span title="indicativo">Indicative</span>''
|-
! style="border: 1px solid #999999; background:#b0bfd4" | <span title="presente">Present</span>
| style="border: 1px solid #999999; vertical-align: top;" | {pres_1s}
| style="border: 1px solid #999999; vertical-align: top;" | {pres_2s}
| style="border: 1px solid #999999; vertical-align: top;" | {pres_3s}
| style="border: 1px solid #999999; vertical-align: top;" | {pres_1p}
| style="border: 1px solid #999999; vertical-align: top;" | {pres_2p}
| style="border: 1px solid #999999; vertical-align: top;" | {pres_3p}
|-
! style="border: 1px solid #999999; background:#b0bfd4" | <span title="pretérito imperfeito">Imperfect</span>
| style="border: 1px solid #999999; vertical-align: top;" | {impf_1s}
| style="border: 1px solid #999999; vertical-align: top;" | {impf_2s}
| style="border: 1px solid #999999; vertical-align: top;" | {impf_3s}
| style="border: 1px solid #999999; vertical-align: top;" | {impf_1p}
| style="border: 1px solid #999999; vertical-align: top;" | {impf_2p}
| style="border: 1px solid #999999; vertical-align: top;" | {impf_3p}
|-
! style="border: 1px solid #999999; background:#b0bfd4" | <span title="pretérito perfeito">Preterite</span>
| style="border: 1px solid #999999; vertical-align: top;" | {pret_1s}
| style="border: 1px solid #999999; vertical-align: top;" | {pret_2s}
| style="border: 1px solid #999999; vertical-align: top;" | {pret_3s}
| style="border: 1px solid #999999; vertical-align: top;" | {pret_1p}
| style="border: 1px solid #999999; vertical-align: top;" | {pret_2p}
| style="border: 1px solid #999999; vertical-align: top;" | {pret_3p}
|-
! style="border: 1px solid #999999; background:#b0bfd4" | <span title="pretérito mais-que-perfeito simples">Pluperfect</span>
| style="border: 1px solid #999999; vertical-align: top;" | {plup_1s}
| style="border: 1px solid #999999; vertical-align: top;" | {plup_2s}
| style="border: 1px solid #999999; vertical-align: top;" | {plup_3s}
| style="border: 1px solid #999999; vertical-align: top;" | {plup_1p}
| style="border: 1px solid #999999; vertical-align: top;" | {plup_2p}
| style="border: 1px solid #999999; vertical-align: top;" | {plup_3p}
|-
! style="border: 1px solid #999999; background:#b0bfd4" | <span title="futuro do presente">Future</span>
| style="border: 1px solid #999999; vertical-align: top;" | {fut_1s}
| style="border: 1px solid #999999; vertical-align: top;" | {fut_2s}
| style="border: 1px solid #999999; vertical-align: top;" | {fut_3s}
| style="border: 1px solid #999999; vertical-align: top;" | {fut_1p}
| style="border: 1px solid #999999; vertical-align: top;" | {fut_2p}
| style="border: 1px solid #999999; vertical-align: top;" | {fut_3p}
|-
! style="border: 1px solid #999999; background:#b0bfd4" | <span title="condicional / futuro do pretérito">Conditional</span>
| style="border: 1px solid #999999; vertical-align: top;" | {cond_1s}
| style="border: 1px solid #999999; vertical-align: top;" | {cond_2s}
| style="border: 1px solid #999999; vertical-align: top;" | {cond_3s}
| style="border: 1px solid #999999; vertical-align: top;" | {cond_1p}
| style="border: 1px solid #999999; vertical-align: top;" | {cond_2p}
| style="border: 1px solid #999999; vertical-align: top;" | {cond_3p}
|-
! style="border: 1px solid #999999; background:#d0f4d0" colspan="7" | ''<span title="conjuntivo (pt) / subjuntivo (br)">Subjunctive</span>''
|-
! style="border: 1px solid #999999; background:#b0d4b0" | <span title=" presente do conjuntivo (pt) / subjuntivo (br)">Present</span>
| style="border: 1px solid #999999; vertical-align: top;" | {pres_sub_1s}
| style="border: 1px solid #999999; vertical-align: top;" | {pres_sub_2s}
| style="border: 1px solid #999999; vertical-align: top;" | {pres_sub_3s}
| style="border: 1px solid #999999; vertical-align: top;" | {pres_sub_1p}
| style="border: 1px solid #999999; vertical-align: top;" | {pres_sub_2p}
| style="border: 1px solid #999999; vertical-align: top;" | {pres_sub_3p}
|-
! style="border: 1px solid #999999; background:#b0d4b0" | <span title="pretérito imperfeito do conjuntivo (pt) / subjuntivo (br)">Imperfect</span>
| style="border: 1px solid #999999; vertical-align: top;" | {impf_sub_1s}
| style="border: 1px solid #999999; vertical-align: top;" | {impf_sub_2s}
| style="border: 1px solid #999999; vertical-align: top;" | {impf_sub_3s}
| style="border: 1px solid #999999; vertical-align: top;" | {impf_sub_1p}
| style="border: 1px solid #999999; vertical-align: top;" | {impf_sub_2p}
| style="border: 1px solid #999999; vertical-align: top;" | {impf_sub_3p}
|-
! style="border: 1px solid #999999; background:#b0d4b0" | <span title="futuro do conjuntivo (pt) / subjuntivo (br)">Future</span>
| style="border: 1px solid #999999; vertical-align: top;" | {fut_sub_1s}
| style="border: 1px solid #999999; vertical-align: top;" | {fut_sub_2s}
| style="border: 1px solid #999999; vertical-align: top;" | {fut_sub_3s}
| style="border: 1px solid #999999; vertical-align: top;" | {fut_sub_1p}
| style="border: 1px solid #999999; vertical-align: top;" | {fut_sub_2p}
| style="border: 1px solid #999999; vertical-align: top;" | {fut_sub_3p}
|-
! style="border: 1px solid #999999; background:#f4e4d0" colspan="7" | ''<span title="imperativo">Imperative</span>''
|-
! style="border: 1px solid #999999; background:#d4c4b0" | <span title="imperativo afirmativo">Affirmative</span>
| style="border: 1px solid #999999; vertical-align: top;" rowspan="2" |
| style="border: 1px solid #999999; vertical-align: top;" | {imp_2s}
| style="border: 1px solid #999999; vertical-align: top;" | {imp_3s}
| style="border: 1px solid #999999; vertical-align: top;" | {imp_1p}
| style="border: 1px solid #999999; vertical-align: top;" | {imp_2p}
| style="border: 1px solid #999999; vertical-align: top;" | {imp_3p}
|-
! style="border: 1px solid #999999; background:#d4c4b0" | <span title="imperativo negativo">Negative</span> (<<não>>)
| style="border: 1px solid #999999; vertical-align: top;" | {neg_imp_2s}
| style="border: 1px solid #999999; vertical-align: top;" | {neg_imp_3s}
| style="border: 1px solid #999999; vertical-align: top;" | {neg_imp_1p}
| style="border: 1px solid #999999; vertical-align: top;" | {neg_imp_2p}
| style="border: 1px solid #999999; vertical-align: top;" | {neg_imp_3p}
|{\cl}{notes_clause}</div></div>]=]
local double_pp_template = [=[
! style="border: 1px solid #999999; background:#ffc498" colspan="7" | ''<span title="particípio irregular">Short past participle</span>''
|-
! style="border: 1px solid #999999; background:#dfa478" | Masculine
| style="border: 1px solid #999999; vertical-align: top;" colspan="3" | {short_pp_ms}
| style="border: 1px solid #999999; vertical-align: top;" colspan="3" | {short_pp_mp}
|-
! style="border: 1px solid #999999; background:#dfa478" | Feminine
| style="border: 1px solid #999999; vertical-align: top;" colspan="3" | {short_pp_fs}
| style="border: 1px solid #999999; vertical-align: top;" colspan="3" | {short_pp_fp}
|-
! style="border: 1px solid #999999; background:#ffc498" colspan="7" | ''<span title="particípio regular">Long past participle</span>''
|-
! style="border: 1px solid #999999; background:#dfa478" | Masculine
| style="border: 1px solid #999999; vertical-align: top;" colspan="3" | {pp_ms}
| style="border: 1px solid #999999; vertical-align: top;" colspan="3" | {pp_mp}
|-
! style="border: 1px solid #999999; background:#dfa478" | Feminine
| style="border: 1px solid #999999; vertical-align: top;" colspan="3" | {pp_fs}
| style="border: 1px solid #999999; vertical-align: top;" colspan="3" | {pp_fp}
|-]=]
local single_pp_template = [=[
! style="border: 1px solid #999999; background:#ffc498" colspan="7" | ''<span title="particípio passado">Past participle</span>''
|-
! style="border: 1px solid #999999; background:#dfa478" | Masculine
| style="border: 1px solid #999999; vertical-align: top;" colspan="3" | {pp_ms}
| style="border: 1px solid #999999; vertical-align: top;" colspan="3" | {pp_mp}
|-
! style="border: 1px solid #999999; background:#dfa478" | Feminine
| style="border: 1px solid #999999; vertical-align: top;" colspan="3" | {pp_fs}
| style="border: 1px solid #999999; vertical-align: top;" colspan="3" | {pp_fp}
|-]=]
local function make_table(alternant_multiword_spec)
local forms = alternant_multiword_spec.forms
forms.title = link_term(alternant_multiword_spec.lemmas[1].form)
if alternant_multiword_spec.annotation ~= "" then
forms.title = forms.title .. " (" .. alternant_multiword_spec.annotation .. ")"
end
forms.description = ""
-- Format the table.
forms.footnote = alternant_multiword_spec.footnote_basic
forms.notes_clause = forms.footnote ~= "" and format(notes_template, forms) or ""
-- has_short_pp is computed in show_forms().
local pp_template = alternant_multiword_spec.has_short_pp and double_pp_template or single_pp_template
forms.pp_clause = format(pp_template, forms)
local table_with_pronouns = rsub(basic_table, "<<(.-)>>", link_term)
return format(table_with_pronouns, forms)
end
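-- Illustrative note (a sketch, not part of the module logic): the <<...>> placeholders in
-- `basic_table` are converted into links by the rsub() call above, and the remaining {slot}
-- placeholders are then filled from `forms` by format(). For example (assuming the pronoun
-- "eu" as it appears in the table header):
--   rsub("<<eu>>", "<<(.-)>>", link_term)  --> the full_link output for [[eu]]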
-- Externally callable function to parse and conjugate a verb given user-specified arguments.
-- Return value is WORD_SPEC, an object where the conjugated forms are in `WORD_SPEC.forms`
-- for each slot. If there are no values for a slot, the slot key will be missing. The value
-- for a given slot is a list of objects {form=FORM, footnotes=FOOTNOTES}.
function export.do_generate_forms(args, source_template, headword_head)
local PAGENAME = mw.title.getCurrentTitle().text
local function in_template_space()
return mw.title.getCurrentTitle().nsText == "Template"
end
-- Determine the verb spec we're being asked to generate the conjugation of. This may be taken from the
-- current page title or the value of |pagename=; but not when called from {{pt-verb form of}}, where the
-- page title is a non-lemma form. Note that the verb spec may omit the infinitive; e.g. it may be "<i-e>".
-- For this reason, we use the value of `pagename` computed here down below, when calling normalize_all_lemmas().
local pagename = source_template ~= "pt-verb form of" and args.pagename or PAGENAME
local head = headword_head or pagename
local arg1 = args[1]
if not arg1 then
if (pagename == "pt-conj" or pagename == "pt-verb") and in_template_space() then
arg1 = "cergir<i-e,i>"
elseif pagename == "pt-verb form of" and in_template_space() then
arg1 = "amar"
else
arg1 = "<>"
end
end
-- When called from {{pt-verb form of}}, determine the non-lemma form whose inflections we're being asked to
-- determine. This normally comes from the page title or the value of |pagename=.
local verb_form_of_form
if source_template == "pt-verb form of" then
verb_form_of_form = args.pagename
if not verb_form_of_form then
if PAGENAME == "pt-verb form of" and in_template_space() then
verb_form_of_form = "ame"
else
verb_form_of_form = PAGENAME
end
end
end
local incorporated_headword_head_into_lemma = false
if arg1:find("^<.*>$") then -- missing lemma
if head:find(" ") then
-- If multiword lemma, try to add arg spec after the first word.
-- Try to preserve the brackets in the part after the verb, but don't do it
-- if there aren't the same number of left and right brackets in the verb
-- (which means the verb was linked as part of a larger expression).
local refl_clitic_verb, post = rmatch(head, "^(.-)( .*)$")
local left_brackets = rsub(refl_clitic_verb, "[^%[]", "")
local right_brackets = rsub(refl_clitic_verb, "[^%]]", "")
if #left_brackets == #right_brackets then
arg1 = iut.remove_redundant_links(refl_clitic_verb) .. arg1 .. post
incorporated_headword_head_into_lemma = true
else
-- Try again using the form without links.
local linkless_head = m_links.remove_links(head)
if linkless_head:find(" ") then
refl_clitic_verb, post = rmatch(linkless_head, "^(.-)( .*)$")
arg1 = refl_clitic_verb .. arg1 .. post
else
error("Unable to incorporate <...> spec into explicit head due to a multiword linked verb or " ..
"unbalanced brackets; please include <> explicitly: " .. arg1)
end
end
else
-- Will be incorporated through `head` below in the call to normalize_all_lemmas().
incorporated_headword_head_into_lemma = true
end
end
local function split_bracketed_runs_into_words(bracketed_runs)
return iut.split_alternating_runs(bracketed_runs, " ", "preserve splitchar")
end
local parse_props = {
parse_indicator_spec = parse_indicator_spec,
-- Split words only on spaces, not on hyphens, because that messes up reflexive verb parsing.
split_bracketed_runs_into_words = split_bracketed_runs_into_words,
allow_default_indicator = true,
allow_blank_lemma = true,
}
local alternant_multiword_spec = iut.parse_inflected_text(arg1, parse_props)
alternant_multiword_spec.pos = pos or "verbs"
alternant_multiword_spec.args = args
alternant_multiword_spec.source_template = source_template
alternant_multiword_spec.verb_form_of_form = verb_form_of_form
alternant_multiword_spec.incorporated_headword_head_into_lemma = incorporated_headword_head_into_lemma
normalize_all_lemmas(alternant_multiword_spec, head)
detect_all_indicator_specs(alternant_multiword_spec)
local inflect_props = {
slot_list = alternant_multiword_spec.all_verb_slots,
inflect_word_spec = conjugate_verb,
get_variants = function(form) return rsub(form, not_var_code_c, "") end,
-- We add links around the generated verbal forms rather than allow the entire multiword
-- expression to be a link, so ensure that user-specified links get included as well.
include_user_specified_links = true,
}
iut.inflect_multiword_or_alternant_multiword_spec(alternant_multiword_spec, inflect_props)
-- Remove redundant brackets around entire forms.
for slot, forms in pairs(alternant_multiword_spec.forms) do
for _, form in ipairs(forms) do
form.form = iut.remove_redundant_links(form.form)
end
end
compute_categories_and_annotation(alternant_multiword_spec)
if args.json and source_template == "pt-conj" then
return export.remove_variant_codes(require("Module:JSON").toJSON(alternant_multiword_spec.forms))
end
return alternant_multiword_spec
end
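-- Illustrative usage sketch (commented out; "sentir<i-e>" is just an assumed example argument,
-- not taken from a real invocation):
--   local spec = export.do_generate_forms({[1] = "sentir<i-e>"}, "pt-conj")
--   local pres_1s = spec.forms.pres_1s  -- list of {form = FORM, footnotes = FOOTNOTES} objects
--   -- forms may still contain variant codes; strip them with export.remove_variant_codes().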
-- Entry point for {{pt-conj}}. Template-callable function to parse and conjugate a verb given
-- user-specified arguments and generate a displayable table of the conjugated forms.
function export.show(frame)
local parent_args = frame:getParent().args
local params = {
[1] = {},
["noautolinktext"] = {type = "boolean"},
["noautolinkverb"] = {type = "boolean"},
["pagename"] = {}, -- for testing/documentation pages
["json"] = {type = "boolean"}, -- for bot use
}
local args = require("Module:parameters").process(parent_args, params)
local alternant_multiword_spec = export.do_generate_forms(args, "pt-conj")
if type(alternant_multiword_spec) == "string" then
-- JSON return value
return alternant_multiword_spec
end
show_forms(alternant_multiword_spec)
return make_table(alternant_multiword_spec) ..
require("Module:utilities").format_categories(alternant_multiword_spec.categories, lang, nil, nil, force_cat)
end
return export
donscgok7v6jgxlbbd3vvj0ekfoi1ub
193437
193426
2024-11-21T10:25:30Z
Lee
19
"Template" සිට "සැකිල්ල" වෙතට
193437
Scribunto
text/plain
local export = {}
--[=[
Authorship: Ben Wing <benwing2>
]=]
--[=[
TERMINOLOGY:
-- "slot" = A particular combination of tense/mood/person/number/etc.
Example slot names for verbs are "pres_1s" (present indicative first-person singular), "pres_sub_2s" (present
subjunctive second-person singular) and "impf_sub_3p" (imperfect subjunctive third-person plural).
Each slot is filled with zero or more forms.
-- "form" = The conjugated Portuguese form representing the value of a given slot.
-- "lemma" = The dictionary form of a given Portuguese term. For Portuguese, always the infinitive.
]=]
--[=[
FIXME:
--"i-e" alternation doesn't work properly when the stem comes with a hiatus in it.
--]=]
local force_cat = false -- set to true for debugging
local check_for_red_links = false -- set to false for debugging
local lang = require("Module:languages").getByCode("pt")
local m_str_utils = require("Module:string utilities")
local m_links = require("Module:links")
local m_table = require("Module:table")
local iut = require("Module:inflection utilities")
local com = require("Module:pt-common")
local format = m_str_utils.format
local remove_final_accent = com.remove_final_accent
local rfind = m_str_utils.find
local rmatch = m_str_utils.match
local rsplit = m_str_utils.split
local rsub = com.rsub
local u = m_str_utils.char
local function link_term(term)
return m_links.full_link({ lang = lang, term = term }, "term")
end
local V = com.V -- vowel regex class
local AV = com.AV -- accented vowel regex class
local C = com.C -- consonant regex class
local AC = u(0x0301) -- acute = ́
local TEMPC1 = u(0xFFF1) -- temporary character used for consonant substitutions
local TEMP_MESOCLITIC_INSERTION_POINT = u(0xFFF2) -- temporary character used to mark the mesoclitic insertion point
local VAR_BR = u(0xFFF3) -- variant code for Brazil
local VAR_PT = u(0xFFF4) -- variant code for Portugal
local VAR_SUPERSEDED = u(0xFFF5) -- variant code for superseded forms
local VAR_NORMAL = u(0xFFF6) -- variant code for non-superseded forms
local all_var_codes = VAR_BR .. VAR_PT .. VAR_SUPERSEDED .. VAR_NORMAL
local var_codes_no_superseded = VAR_BR .. VAR_PT .. VAR_NORMAL
local var_code_c = "[" .. all_var_codes .. "]"
local var_code_no_superseded_c = "[" .. var_codes_no_superseded .. "]"
local not_var_code_c = "[^" .. all_var_codes .. "]"
-- Export variant codes for use in [[Module:pt-inflections]].
export.VAR_BR = VAR_BR
export.VAR_PT = VAR_PT
export.VAR_SUPERSEDED = VAR_SUPERSEDED
export.VAR_NORMAL = VAR_NORMAL
local short_pp_footnote = "[usually used with auxiliary verbs " .. link_term("ser") .. " and " .. link_term("estar") .. "]"
local long_pp_footnote = "[usually used with auxiliary verbs " .. link_term("haver") .. " and " .. link_term("ter") .. "]"
--[=[
Vowel alternations:
<i-e>: 'i' in pres1s and the whole present subjunctive; 'e' elsewhere when stressed. Generally 'e' otherwise when
unstressed. E.g. [[sentir]], [[conseguir]] (the latter additionally with 'gu-g' alternation).
<u-o>: 'u' in pres1s and the whole present subjunctive; 'o' elsewhere when stressed. Either 'o' or 'u' otherwise when
unstressed. E.g. [[dormir]], [[subir]].
<i>: 'i' whenever stressed (in the present singular and third plural) and throughout the whole present subjunctive.
Otherwise 'e'. E.g. [[progredir]], also [[premir]] per Priberam.
<u>: 'u' whenever stressed (in the present singular and third plural) and throughout the whole present subjunctive.
Otherwise 'o'. E.g. [[polir]], [[extorquir]] (the latter also <u-o>).
<í>: The last 'i' of the stem (excluding stem-final 'i') becomes 'í' when stressed. E.g.:
* [[proibir]] ('proíbo, proíbe(s), proíbem, proíba(s), proíbam')
* [[faiscar]] ('faísco, faísca(s), faíscam, faísque(s), faísquem' also with 'c-qu' alternation)
* [[homogeneizar]] ('homogeneízo', etc.)
* [[mobiliar]] ('mobílio', etc.; note here the final -i is ignored when determining which vowel to stress)
* [[tuitar]] ('tuíto', etc.)
<ú>: The last 'u' of the stem (excluding stem-final 'u') becomes 'ú' when stressed. E.g.:
* [[reunir]] ('reúno, reúne(s), reúnem, reúna(s), reúnam')
* [[esmiuçar]] ('esmiúço, esmiúça(s), esmiúçam, esmiúce(s), esmiúcem' also with 'ç-c' alternation)
* [[reusar]] ('reúso, reúsa(s), reúsam, reúse(s), reúsem')
* [[saudar]] ('saúdo, saúda(s), saúdam, saúde(s), saúdem')
]=]
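-- Illustrative note (an assumption about typical usage rather than documented syntax): the
-- alternation is selected through the angle-bracket indicator spec, e.g. a page for [[sentir]]
-- would use <i-e>, [[reunir]] would use <ú>, and [[progredir]] would use <i>, in line with the
-- per-verb recommendations in the built-in conjugation notes further below.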
local vowel_alternants = m_table.listToSet({"i-e", "i", "í", "u-o", "u", "ú", "ei", "+"})
local vowel_alternant_to_desc = {
["i-e"] = "''i-e'' alternation in present singular",
["i"] = "''e'' becomes ''i'' when stressed",
["í"] = "''i'' becomes ''í'' when stressed",
["u-o"] = "''u-o'' alternation in present singular",
["u"] = "''o'' becomes ''u'' when stressed",
["ú"] = "''u'' becomes ''ú'' when stressed",
["ei"] = "''i'' becomes ''ei'' when stressed",
}
local vowel_alternant_to_cat = {
["i-e"] = "i-e alternation in present singular",
["i"] = "e becoming i when stressed",
["í"] = "i becoming í when stressed",
["u-o"] = "u-o alternation in present singular",
["u"] = "o becoming u when stressed",
["ú"] = "u becoming ú when stressed",
["ei"] = "i becoming ei when stressed",
}
local all_persons_numbers = {
["1s"] = "1|s",
["2s"] = "2|s",
["3s"] = "3|s",
["1p"] = "1|p",
["2p"] = "2|p",
["3p"] = "3|p",
}
local person_number_list = {"1s", "2s", "3s", "1p", "2p", "3p"}
local imp_person_number_list = {"2s", "3s", "1p", "2p", "3p"}
local neg_imp_person_number_list = {"2s", "3s", "1p", "2p", "3p"}
local person_number_to_reflexive_pronoun = {
["1s"] = "me",
["2s"] = "te",
["3s"] = "se",
["1p"] = "nos",
["2p"] = "vos",
["3p"] = "se",
}
local indicator_flags = m_table.listToSet {
"no_pres_stressed", "no_pres1_and_sub",
"only3s", "only3sp", "only3p",
"pp_inv", "irreg", "no_built_in", "e_ei_cat",
}
-- Remove any variant codes e.g. VAR_BR, VAR_PT, VAR_SUPERSEDED. Needs to be called from [[Module:pt-headword]] on the
-- output of do_generate_forms(). `keep_superseded` leaves VAR_SUPERSEDED; used in the `canonicalize` function of
show_forms() because we then process and remove it in `generate_link`. FIXME: Use metadata for this once it's
-- supported in [[Module:inflection utilities]].
function export.remove_variant_codes(form, keep_superseded)
return rsub(form, keep_superseded and var_code_no_superseded_c or var_code_c, "")
end
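-- Illustrative sketch (commented out): variant codes are invisible characters embedded in forms,
-- so stripping them leaves the plain form, e.g.
--   export.remove_variant_codes(VAR_SUPERSEDED .. "dêem")                    --> "dêem"
--   export.remove_variant_codes(VAR_SUPERSEDED .. "dêem", "keep superseded") --> VAR_SUPERSEDED .. "dêem"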
-- Initialize all the slots for which we generate forms.
local function add_slots(alternant_multiword_spec)
-- "Basic" slots: All slots that go into the regular table (not the reflexive form-of table).
alternant_multiword_spec.verb_slots_basic = {
{"infinitive", "inf"},
{"infinitive_linked", "inf"},
{"gerund", "ger"},
{"short_pp_ms", "short|m|s|past|part"},
{"short_pp_fs", "short|f|s|past|part"},
{"short_pp_mp", "short|m|p|past|part"},
{"short_pp_fp", "short|f|p|past|part"},
{"pp_ms", "m|s|past|part"},
{"pp_fs", "f|s|past|part"},
{"pp_mp", "m|p|past|part"},
{"pp_fp", "f|p|past|part"},
}
-- Special slots used to handle non-reflexive parts of reflexive verbs in {{pt-verb form of}}.
-- For example, for a reflexive-only verb like [[esbaldar-se]], we want to be able to use {{pt-verb form of}} on
-- [[esbalde]] (which should mention that it is a part of 'me esbalde', first-person singular present subjunctive,
-- and 'se esbalde', third-person singular present subjunctive) or on [[esbaldamos]] (which should mention that it
-- is a part of 'esbaldamo-nos', first-person plural present indicative or preterite). Similarly, we want to use
-- {{pt-verb form of}} on [[esbaldando]] (which should mention that it is a part of 'se ... esbaldando', syntactic
-- variant of [[esbaldando-se]], which is the gerund of [[esbaldar-se]]). To do this, we need to be able to map
-- non-reflexive parts like [[esbalde]], [[esbaldamos]], [[esbaldando]], etc. to their reflexive equivalent(s), to
-- the tag(s) of the equivalent(s), and, in the case of forms like [[esbaldando]], [[esbaldar]] and imperatives, to
-- the separated syntactic variant of the verb+clitic combination. We do this by creating slots for the
-- non-reflexive part equivalent of each basic reflexive slot, and for the separated syntactic-variant equivalent
-- of each basic reflexive slot that is formed of verb+clitic. We use slots in this way to deal with multiword
-- lemmas. Note that we run into difficulties mapping between reflexive verbs, non-reflexive part equivalents, and
-- separated syntactic variants if a slot contains more than one form. To handle this, if there are the same number
-- of forms in two slots we're trying to match up, we assume the forms match one-to-one; otherwise we don't match up
-- the two slots (which means {{pt-verb form of}} won't work in this case, but such a case is extremely rare and not
-- worth worrying about). Alternatives that handle this "properly" are significantly more complicated and require
-- non-trivial modifications to [[Module:inflection utilities]].
local need_special_verb_form_of_slots = alternant_multiword_spec.source_template == "pt-verb form of" and
alternant_multiword_spec.refl
if need_special_verb_form_of_slots then
alternant_multiword_spec.verb_slots_reflexive_verb_form_of = {
{"infinitive_non_reflexive", "-"},
{"infinitive_variant", "-"},
{"gerund_non_reflexive", "-"},
{"gerund_variant", "-"},
}
else
alternant_multiword_spec.verb_slots_reflexive_verb_form_of = {}
end
-- Add entries for a slot with person/number variants.
-- `verb_slots` is the table to add to.
-- `slot_prefix` is the prefix of the slot, typically specifying the tense/aspect.
-- `tag_suffix` is a string listing the set of inflection tags to add after the person/number tags.
-- `person_number_list` is a list of the person/number slot suffixes to add to `slot_prefix`.
local function add_personal_slot(verb_slots, slot_prefix, tag_suffix, person_number_list)
for _, persnum in ipairs(person_number_list) do
local persnum_tag = all_persons_numbers[persnum]
local slot = slot_prefix .. "_" .. persnum
local accel = persnum_tag .. "|" .. tag_suffix
table.insert(verb_slots, {slot, accel})
end
end
-- Add a personal slot (i.e. a slot with person/number variants) to `verb_slots_basic`.
local function add_basic_personal_slot(slot_prefix, tag_suffix, person_number_list, no_special_verb_form_of_slot)
add_personal_slot(alternant_multiword_spec.verb_slots_basic, slot_prefix, tag_suffix, person_number_list)
-- Add special slots for handling non-reflexive parts of reflexive verbs in {{pt-verb form of}}.
-- See comment above in `need_special_verb_form_of_slots`.
if need_special_verb_form_of_slots and not no_special_verb_form_of_slot then
for _, persnum in ipairs(person_number_list) do
local persnum_tag = all_persons_numbers[persnum]
local basic_slot = slot_prefix .. "_" .. persnum
local accel = persnum_tag .. "|" .. tag_suffix
table.insert(alternant_multiword_spec.verb_slots_reflexive_verb_form_of, {basic_slot .. "_non_reflexive", "-"})
end
end
end
add_basic_personal_slot("pres", "pres|ind", person_number_list)
add_basic_personal_slot("impf", "impf|ind", person_number_list)
add_basic_personal_slot("pret", "pret|ind", person_number_list)
add_basic_personal_slot("plup", "plup|ind", person_number_list)
add_basic_personal_slot("fut", "fut|ind", person_number_list)
add_basic_personal_slot("cond", "cond", person_number_list)
add_basic_personal_slot("pres_sub", "pres|sub", person_number_list)
add_basic_personal_slot("impf_sub", "impf|sub", person_number_list)
add_basic_personal_slot("fut_sub", "fut|sub", person_number_list)
add_basic_personal_slot("imp", "imp", imp_person_number_list)
add_basic_personal_slot("pers_inf", "pers|inf", person_number_list)
-- Don't need special non-reflexive-part slots because the negative imperative is multiword, of which the
-- individual words are 'não' + subjunctive.
add_basic_personal_slot("neg_imp", "neg|imp", neg_imp_person_number_list, "no special verb form of")
-- Don't need special non-reflexive-part slots because we don't want [[esbaldando]] mapping to [[esbaldando-me]]
-- (only [[esbaldando-se]]) or [[esbaldar]] mapping to [[esbaldar-me]] (only [[esbaldar-se]]).
add_basic_personal_slot("infinitive", "inf", person_number_list, "no special verb form of")
add_basic_personal_slot("gerund", "ger", person_number_list, "no special verb form of")
-- Generate the list of all slots.
alternant_multiword_spec.all_verb_slots = {}
for _, slot_and_accel in ipairs(alternant_multiword_spec.verb_slots_basic) do
table.insert(alternant_multiword_spec.all_verb_slots, slot_and_accel)
end
for _, slot_and_accel in ipairs(alternant_multiword_spec.verb_slots_reflexive_verb_form_of) do
table.insert(alternant_multiword_spec.all_verb_slots, slot_and_accel)
end
alternant_multiword_spec.verb_slots_basic_map = {}
for _, slotaccel in ipairs(alternant_multiword_spec.verb_slots_basic) do
local slot, accel = unpack(slotaccel)
alternant_multiword_spec.verb_slots_basic_map[slot] = accel
end
end
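-- Illustrative sketch (commented out): after add_slots() runs, verb_slots_basic pairs each slot
-- name with its accelerator tag string, e.g.
--   {"pres_1s", "1|s|pres|ind"}, {"impf_sub_3p", "3|p|impf|sub"},
--   {"pp_ms", "m|s|past|part"}, {"neg_imp_2s", "2|s|neg|imp"}
-- and verb_slots_basic_map indexes the same tag strings by slot name.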
local overridable_stems = {}
local function allow_multiple_values(separated_groups, data)
local retvals = {}
for _, separated_group in ipairs(separated_groups) do
local footnotes = data.fetch_footnotes(separated_group)
local retval = {form = separated_group[1], footnotes = footnotes}
table.insert(retvals, retval)
end
return retvals
end
local function simple_choice(choices)
return function(separated_groups, data)
if #separated_groups > 1 then
data.parse_err("For spec '" .. data.prefix .. ":', only one value currently allowed")
end
if #separated_groups[1] > 1 then
data.parse_err("For spec '" .. data.prefix .. ":', no footnotes currently allowed")
end
local choice = separated_groups[1][1]
if not m_table.contains(choices, choice) then
data.parse_err("For spec '" .. data.prefix .. ":', saw value '" .. choice .. "' but expected one of '" ..
table.concat(choices, ",") .. "'")
end
return choice
end
end
for _, overridable_stem in ipairs {
"pres_unstressed",
"pres_stressed",
"pres1_and_sub",
-- Don't include pres1; use pres_1s if you need to override just that form
"impf",
"full_impf",
"pret_base",
"pret",
{"pret_conj", simple_choice({"irreg", "ar", "er", "ir"}) },
"fut",
"cond",
"pres_sub_stressed",
"pres_sub_unstressed",
{"sub_conj", simple_choice({"ar", "er"}) },
"plup",
"impf_sub",
"fut_sub",
"pers_inf",
"pp",
"short_pp",
} do
if type(overridable_stem) == "string" then
overridable_stems[overridable_stem] = allow_multiple_values
else
local stem, validator = unpack(overridable_stem)
overridable_stems[stem] = validator
end
end
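-- Illustrative sketch (examples drawn from the built-in conjugation notes further below):
-- plain-string stems above take one or more comma-separated values, each parsed by
-- allow_multiple_values() into {form = ..., footnotes = ...} objects, e.g. <short_pp:anexo> or
-- <short_pp:aceito[Brazil],aceite[Portugal]>, whereas simple_choice stems accept exactly one
-- bare value (presumably written the same way, e.g. <pret_conj:irreg>).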
-- Useful as the value of the `match` property of a built-in verb. `main_verb_spec` is a Lua pattern that should match
-- the non-prefixed part of a verb, and `prefix_specs` is a list of Lua patterns that should match the prefixed part of
-- a verb. If a prefix spec is preceded by ^, it must match exactly at the beginning of the verb; otherwise, additional
-- prefixes (e.g. re-, des-) may precede. Return the prefix and main verb.
local function match_against_verbs(main_verb_spec, prefix_specs)
return function(verb)
for _, prefix_spec in ipairs(prefix_specs) do
if prefix_spec:find("^%^") then
-- must match exactly
prefix_spec = prefix_spec:gsub("^%^", "")
if prefix_spec == "" then
-- We can't reuse the prefix+main pattern from the else branch here, because an empty
-- capture () in rmatch() returns the current match position rather than an empty string.
local main_verb = rmatch(verb, "^(" .. main_verb_spec .. ")$")
if main_verb then
return "", main_verb
end
else
local prefix, main_verb = rmatch(verb, "^(" .. prefix_spec .. ")(" .. main_verb_spec .. ")$")
if prefix then
return prefix, main_verb
end
end
else
local prefix, main_verb = rmatch(verb, "^(.*" .. prefix_spec .. ")(" .. main_verb_spec .. ")$")
if prefix then
return prefix, main_verb
end
end
end
return nil
end
end
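-- Illustrative sketch (commented out; "desconter" is a made-up word used only for illustration):
--   match_against_verbs("dar", {"^", "^des", "^re"})("dar")     --> "", "dar"
--   match_against_verbs("dar", {"^", "^des", "^re"})("desdar")  --> "des", "dar"
--   match_against_verbs("dar", {"^", "^des", "^re"})("lembrar") --> nil
-- An unanchored prefix spec such as "con" also allows further prefixes, so
-- match_against_verbs("ter", {"con"})("desconter") --> "descon", "ter".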
--[=[
Built-in (usually irregular) conjugations.
Each entry is processed in turn and consists of an object with two fields:
1. match=: Specifies the built-in verbs that match this object.
2. forms=: Specifies the built-in stems and forms for these verbs.
The value of match= is either a string beginning with "^" (match only the specified verb), a string not beginning
with "^" (match any verb ending in that string), or a function that is passed in the verb and should return the prefix
of the verb if it matches, otherwise nil. The function match_against_verbs() is provided to facilitate matching a set
of verbs with a common ending and specific prefixes (e.g. [[ter]] and [[ater]] but not [[abater]], etc.).
The value of forms= is a table specifying stems and individual override forms. Each key of the table names either a
stem (e.g. `pres_stressed`), a stem property (e.g. `vowel_alt`) or an individual override form (e.g. `pres_1s`).
Each value of a stem can either be a string (a single stem), a list of strings, or a list of objects of the form
{form = STEM, footnotes = {FOOTNOTES}}. Each value of an individual override should be of exactly the same form except
that the strings specify full forms rather than stems. The values of a stem property depend on the specific property
but are generally strings or booleans.
In order to understand how the stem specifications work, it's important to understand the phonetic modifications done
by combine_stem_ending(). In general, the complexities of predictable prefix, stem and ending modifications are all
handled in this function. In particular:
1. Spelling-based modifications (c/z, g/gu, gu/gü, g/j) occur automatically as appropriate for the ending.
2. If the stem begins with an acute accent, the accent is moved onto the last vowel of the prefix (for handling verbs
in -uar such as [[minguar]], pres_3s 'míngua').
3. If the ending begins with a double asterisk, this is a signal to conditionally delete the accent on the last letter
of the stem. "Conditionally" means we don't do it if the last two letters would form a diphthong without the accent
on the second one (e.g. in [[sair]], with stem 'saí'); but as an exception, we do delete the accent in stems
ending in -guí, -quí (e.g. in [[conseguir]]) because in this case the ui isn't a diphthong.
4. If the ending begins with an asterisk, this is a signal to delete the accent on the last letter of the stem, e.g.
fizé -> fizermos. Unlike for **, this removal is unconditional, so we get e.g. 'sairmos' not #'saírmos'.
5. If ending begins with i, it must get an accent after an unstressed vowel (in some but not all cases) to prevent the
two merging into a diphthong. See combine_stem_ending() for specifics.
The following stems are recognized:
-- pres_unstressed: The present indicative unstressed stem (1p, 2p). Also controls the imperative 2p
and gerund. Defaults to the infinitive stem (minus the ending -ar/-er/-ir/-or).
-- pres_stressed: The present indicative stressed stem (1s, 2s, 3s, 3p). Also controls the imperative 2s.
Default is empty if indicator `no_pres_stressed`, else a vowel alternation if such an indicator is given
(e.g. `i-e`, `í`), else the infinitive stem.
-- pres1_and_sub: Overriding stem for 1s present indicative and the entire subjunctive. Only set by irregular verbs
and by the indicators `no_pres_stressed` (e.g. [[precaver]]) and `no_pres1_and_sub` (since verbs of this sort,
e.g. [[puir]], are missing the entire subjunctive as well as the 1s present indicative). Used by many irregular
verbs, e.g. [[caber]], verbs in '-air', [[dizer]], [[ter]], [[valer]], etc. Some verbs set this and then supply an
override for the pres_1sg if it's irregular, e.g. [[saber]], with irregular subjunctive stem "saib-" and special
1s present indicative "sei".
-- pres1: Special stem for 1s present indicative. Normally, do not set this explicitly. If you need to specify an
irregular 1s present indicative, use the form override pres_1s= to specify the entire form. Defaults to
pres1_and_sub if given, else pres_stressed.
-- pres_sub_unstressed: The present subjunctive unstressed stem (1p, 2p). Defaults to pres1_and_sub if given, else the
infinitive stem.
-- pres_sub_stressed: The present subjunctive stressed stem (1s, 2s, 3s, 1p). Defaults to pres1.
-- sub_conj: Determines the set of endings used in the subjunctive. Should be one of "ar" or "er".
-- impf: The imperfect stem (not including the -av-/-i- stem suffix, which is determined by the conjugation). Defaults
to the infinitive stem.
-- full_impf: The full imperfect stem missing only the endings (-a, -as, -am, etc.). Used for verbs with irregular
imperfects such as [[ser]], [[ter]], [[vir]] and [[pôr]]. Overrides must be supplied for the impf_1p and impf_2p
due to these forms having an accent on the stem.
-- pret_base: The preterite stem (not including the -a-/-e-/-i- stem suffix). Defaults to the infinitive stem.
-- pret: The full preterite stem missing only the endings (-ste, -mos, etc.). Used for verbs with irregular preterites
(pret_conj == "irreg") such as [[fazer]], [[poder]], [[trazer]], etc. Overrides must be supplied for the pret_1s
and pret_3s. Defaults to `pret_base` + the accented conjugation vowel.
-- pret_conj: Determines the set of endings used in the preterite. Should be one of "ar", "er", "ir" or "irreg".
Defaults to the conjugation as determined from the infinitive. When pret_conj == "irreg", stem `pret` is used,
otherwise `pret_base`.
-- fut: The future stem. Defaults to the infinitive stem + the unaccented conjugation vowel.
-- cond: The conditional stem. Defaults to `fut`.
-- impf_sub: The imperfect subjunctive stem. Defaults to `pret`.
-- fut_sub: The future subjunctive stem. Defaults to `pret`.
-- plup: The pluperfect stem. Defaults to `pret`.
-- pers_inf: The personal infinitive stem. Defaults to the infinitive stem + the accented conjugation vowel.
-- pp: The masculine singular past participle. Default is based on the verb conjugation: infinitive stem + "ado" for
-ar verbs, otherwise infinitive stem + "ido".
-- short_pp: The short masculine singular past participle, for verbs with such a form. No default.
-- pp_inv: True if the past participle exists only in the masculine singular.
]=]
local built_in_conjugations = {
--------------------------------------------------------------------------------------------
-- -ar --
--------------------------------------------------------------------------------------------
-- Verbs not needing entries here:
--
-- (1) Verbs with short past participles: need to specify the short pp explicitly.
--
-- aceitar: use <short_pp:aceito[Brazil],aceite[Portugal]>
-- anexar, completar, expressar, expulsar, findar, fritar, ganhar, gastar, limpar, pagar, pasmar, pegar, soltar:
-- use <short_pp:anexo> etc.
-- assentar: use <short_pp:assente>
-- entregar: use <short_pp:entregue>
-- enxugar: use <short_pp:enxuto>
-- matar: use <short_pp:morto>
--
-- (2) Verbs with orthographic consonant alternations: handled automatically.
--
-- -car (brincar, buscar, pecar, trancar, etc.): automatically handled in combine_stem_ending()
-- -çar (alcançar, começar, laçar): automatically handled in combine_stem_ending()
-- -gar (apagar, cegar, esmagar, largar, navegar, resmungar, sugar, etc.): automatically handled in combine_stem_ending()
--
-- (3) Verbs with vowel alternations: need to specify the alternation explicitly unless it always happens, in
-- which case it's handled automatically through an entry below.
--
-- esmiuçar changing to esmiúço: use <ú>
-- faiscar changing to faísco: use <í>
-- -iar changing to -eio (ansiar, incendiar, mediar, odiar, remediar, etc.): use <ei>
-- -izar changing to -ízo (ajuizar, enraizar, homogeneizar, plebeizar, etc.): use <í>
-- mobiliar changing to mobílio: use <í>
-- reusar changing to reúso: use <ú>
-- saudar changing to saúdo: use <ú>
-- tuitar/retuitar changing to (re)tuíto: use <í>
{
-- dar, desdar
match = match_against_verbs("dar", {"^", "^des", "^re"}),
forms = {
pres_1s = "dou",
pres_2s = "dás",
pres_3s = "dá",
-- damos, dais regular
pres_3p = "dão",
pret = "dé", pret_conj = "irreg", pret_1s = "dei", pret_3s = "deu",
pres_sub_1s = "dê",
pres_sub_2s = "dês",
pres_sub_3s = "dê",
pres_sub_1p = {"demos", "dêmos"},
-- deis regular
pres_sub_3p = {"deem", VAR_SUPERSEDED .. "dêem"},
irreg = true,
}
},
{
-- -ear (frear, nomear, semear, etc.)
match = "ear",
forms = {
pres_stressed = "ei",
e_ei_cat = true,
}
},
{
-- estar
match = match_against_verbs("estar", {"^", "sob", "sobr"}),
forms = {
pres_1s = "estou",
pres_2s = "estás",
pres_3s = "está",
-- FIXME, estámos is claimed as an alternative pres_1p in the old conjugation data, but I believe this is garbage
pres_3p = "estão",
pres1_and_sub = "estej", -- only for subjunctive as we override pres_1s
sub_conj = "er",
pret = "estivé", pret_conj = "irreg", pret_1s = "estive", pret_3s = "esteve",
-- [[sobestar]], [[sobrestar]] are transitive so they have fully inflected past participles
pp_inv = function(base, prefix) return prefix == "" end,
irreg = true,
}
},
{
-- It appears that only [[resfolegar]] has proparoxytone forms, not [[folegar]] or [[tresfolegar]].
match = "^resfolegar",
forms = {
pres_stressed = {"resfóleg", "resfoleg"},
irreg = true,
}
},
{
-- aguar/desaguar/enxaguar, ambiguar/apaziguar/averiguar, minguar, cheguar?? (obsolete variant of [[chegar]])
match = "guar",
forms = {
-- combine_stem_ending() will move the acute accent backwards so it sits after the last vowel in [[minguar]]
pres_stressed = {{form = AC .. "gu", footnotes = {"[Brazilian Portuguese]"}}, {form = "gu", footnotes = {"[European Portuguese]"}}},
pres_sub_stressed = {
{form = AC .. "gu", footnotes = {"[Brazilian Portuguese]"}},
{form = "gu", footnotes = {"[European Portuguese]"}},
{form = AC .. VAR_SUPERSEDED .. "gü", footnotes = {"[Brazilian Portuguese]"}},
{form = VAR_SUPERSEDED .. "gú", footnotes = {"[European Portuguese]"}},
},
pres_sub_unstressed = {"gu", {form = VAR_SUPERSEDED .. "gü", footnotes = {"[Brazilian Portuguese]"}}},
pret_1s = {"guei", {form = VAR_SUPERSEDED .. "güei", footnotes = {"[Brazilian Portuguese]"}}},
}
},
{
-- adequar/readequar, antiquar/obliquar, apropinquar
match = "quar",
forms = {
-- combine_stem_ending() will move the acute accent backwards so it sits after the last vowel in [[apropinquar]]
pres_stressed = {{form = AC .. "qu", footnotes = {"[Brazilian Portuguese]"}}, {form = "qu", footnotes = {"[European Portuguese]"}}},
pres_sub_stressed = {
{form = AC .. "qu", footnotes = {"[Brazilian Portuguese]"}},
{form = "qu", footnotes = {"[European Portuguese]"}},
{form = AC .. VAR_SUPERSEDED .. "qü", footnotes = {"[Brazilian Portuguese]"}},
{form = VAR_SUPERSEDED .. "qú", footnotes = {"[European Portuguese]"}},
},
pres_sub_unstressed = {"qu", {form = VAR_SUPERSEDED .. "qü", footnotes = {"[Brazilian Portuguese]"}}},
pret_1s = {"quei", {form = VAR_SUPERSEDED .. "qüei", footnotes = {"[Brazilian Portuguese]"}}},
}
},
{
-- -oar (abençoar, coroar, enjoar, perdoar, etc.)
match = "oar",
forms = {
pres_1s = {"oo", VAR_SUPERSEDED .. "ôo"},
}
},
{
-- -oiar (apoiar, boiar)
match = "oiar",
forms = {
pres_stressed = {"oi", {form = VAR_SUPERSEDED .. "ói", footnotes = {"[Brazilian Portuguese]"}}},
}
},
{
-- parar
match = "^parar",
forms = {
pres_3s = {"para", VAR_SUPERSEDED .. "pára"},
}
},
{
-- pelar
match = "^pelar",
forms = {
pres_1s = {"pelo", VAR_SUPERSEDED .. "pélo"},
pres_2s = {"pelas", VAR_SUPERSEDED .. "pélas"},
pres_3s = {"pela", VAR_SUPERSEDED .. "péla"},
}
},
--------------------------------------------------------------------------------------------
-- -er --
--------------------------------------------------------------------------------------------
-- Verbs not needing entries here:
--
-- precaver: use <no_pres_stressed>
-- -cer (verbs in -ecer, descer, vencer, etc.): automatically handled in combine_stem_ending()
-- -ger (proteger, reger, etc.): automatically handled in combine_stem_ending()
-- -guer (erguer/reerguer/soerguer): automatically handled in combine_stem_ending()
{
-- benzer
match = "benzer",
forms = {short_pp = "bento"}
},
{
-- caber
match = "caber",
forms = {
pres1_and_sub = "caib",
pret = "coubé", pret_1s = "coube", pret_3s = "coube", pret_conj = "irreg",
irreg = true,
}
},
{
-- crer, descrer
match = "crer",
forms = {
pres_2s = "crês", pres_3s = "crê",
pres_2p = "credes", pres_3p = {"creem", VAR_SUPERSEDED .. "crêem"},
pres1_and_sub = "crei",
irreg = true,
}
},
{
-- dizer, bendizer, condizer, contradizer, desdizer, maldizer, predizer, etc.
match = "dizer",
forms = {
-- use 'digu' because we're in a front environment; if we use 'dig', we'll get '#dijo'
pres1_and_sub = "digu", pres_3s = "diz",
pret = "dissé", pret_conj = "irreg", pret_1s = "disse", pret_3s = "disse", pp = "dito",
fut = "dir",
imp_2s = {"diz", "dize"}, -- per Infopédia
irreg = true,
}
},
{
-- eleger, reeleger
match = "eleger",
forms = {short_pp = "eleito"}
},
{
-- acender, prender; not desprender, etc.
match = match_against_verbs("ender", {"^ac", "^pr"}),
forms = {short_pp = "eso"}
},
{
-- fazer, afazer, contrafazer, desfazer, liquefazer, perfazer, putrefazer, rarefazer, refazer, satisfazer, tumefazer
match = "fazer",
forms = {
pres1_and_sub = "faç", pres_3s = "faz",
pret = "fizé", pret_conj = "irreg", pret_1s = "fiz", pret_3s = "fez", pp = "feito",
fut = "far",
imp_2s = {"faz", {form = "faze", footnotes = {"[Brazil only]"}}}, -- per Priberam
irreg = true,
}
},
{
match = "^haver",
forms = {
pres_1s = "hei",
pres_2s = "hás",
pres_3s = "há",
pres_1p = {"havemos", "hemos"},
pres_2p = {"haveis", "heis"},
pres_3p = "hão",
pres1_and_sub = "haj", -- only for subjunctive as we override pres_1s
pret = "houvé", pret_conj = "irreg", pret_1s = "houve", pret_3s = "houve",
imp_2p = "havei",
irreg = true,
}
},
-- reaver below under r-
{
-- jazer, adjazer
match = "jazer",
forms = {
pres_3s = "jaz",
imp_2s = {"jaz", "jaze"}, -- per Infopédia
irreg = true,
}
},
{
-- ler, reler, tresler; not excel(l)er, valer, etc.
match = match_against_verbs("ler", {"^", "^re", "tres"}),
forms = {
pres_2s = "lês", pres_3s = "lê",
pres_2p = "ledes", pres_3p = {"leem", VAR_SUPERSEDED .. "lêem"},
pres1_and_sub = "lei",
irreg = true,
}
},
{
-- morrer, desmorrer
match = "morrer",
forms = {short_pp = "morto"}
},
{
-- doer, moer/remoer, roer/corroer, soer
match = "oer",
forms = {
pres_1s = function(base, prefix)
return prefix ~= "s" and {"oo", VAR_SUPERSEDED .. "ôo"} or nil
end, pres_2s = "óis", pres_3s = "ói",
-- impf -ía etc., pret_1s -oí and pp -oído handled automatically in combine_stem_ending()
only3sp = function(base, prefix) return prefix == "d" end,
no_pres1_and_sub = function(base, prefix) return prefix == "s" end,
irreg = true,
}
},
{
-- perder
match = "perder",
forms = {
-- use 'perqu' because we're in a front environment; if we use 'perc', we'll get '#perço'
pres1_and_sub = "perqu",
irreg = true,
}
},
{
-- poder
match = "poder",
forms = {
pres1_and_sub = "poss",
pret = "pudé", pret_1s = "pude", pret_3s = "pôde", pret_conj = "irreg",
irreg = true,
}
},
{
-- prazer, aprazer, comprazer, desprazer
match = "prazer",
forms = {
pres_3s = "praz",
pret = "prouvé", pret_1s = "prouve", pret_3s = "prouve", pret_conj = "irreg",
only3sp = function(base, prefix) return not prefix:find("com$") end,
irreg = true,
}
},
-- prover below, just below ver
{
-- requerer; must precede querer
match = "requerer",
forms = {
-- old module claims alt pres_3s 'requere'; not in Priberam, Infopédia or conjugacao.com.br
pres_3s = "requer",
pres1_and_sub = "requeir",
imp_2s = {{form = "requere", footnotes = {"[Brazil only]"}}, "requer"}, -- per Priberam
-- regular preterite, unlike [[querer]]
irreg = true,
}
},
{
-- querer, desquerer, malquerer
match = "querer",
forms = {
-- old module claims alt pres_3s 'quere'; not in Priberam, Infopédia or conjugacao.com.br
pres_1s = "quero", pres_3s = "quer",
pres1_and_sub = "queir", -- only for subjunctive as we override pres_1s
pret = "quisé", pret_1s = "quis", pret_3s = "quis", pret_conj = "irreg",
imp_2s = {{form = "quere", footnotes = {"[Brazil only]"}}, {form = "quer", footnotes = {"[Brazil only]"}}}, -- per Priberam
irreg = true,
}
},
{
match = "reaver",
forms = {
no_pres_stressed = true,
pret = "reouvé", pret_conj = "irreg", pret_1s = "reouve", pret_3s = "reouve",
irreg = true,
}
},
{
-- saber, ressaber
match = "saber",
forms = {
pres_1s = "sei",
pres1_and_sub = "saib", -- only for subjunctive as we override pres_1s
pret = "soubé", pret_1s = "soube", pret_3s = "soube", pret_conj = "irreg",
irreg = true,
}
},
{
-- escrever/reescrever, circunscrever, descrever/redescrever, inscrever, prescrever, proscrever, subscrever,
-- transcrever, others?
match = "screver",
forms = {
pp = "scrito",
irreg = true,
}
},
{
-- suspender
match = "suspender",
forms = {short_pp = "suspenso"}
},
{
match = "^ser",
forms = {
pres_1s = "sou", pres_2s = "és", pres_3s = "é",
pres_1p = "somos", pres_2p = "sois", pres_3p = "são",
pres1_and_sub = "sej", -- only for subjunctive as we override pres_1s
full_impf = "er", impf_1p = "éramos", impf_2p = "éreis",
pret = "fô", pret_1s = "fui", pret_3s = "foi", pret_conj = "irreg",
imp_2s = "sê", imp_2p = "sede",
pp_inv = true,
irreg = true,
}
},
{
-- We want to match abster, conter, deter, etc. but not abater, cometer, etc. No way to avoid listing each verb.
match = match_against_verbs("ter", {"abs", "^a", "con", "de", "entre", "man", "ob", "^re", "sus", "^"}),
forms = {
pres_2s = function(base, prefix) return prefix == "" and "tens" or "téns" end,
pres_3s = function(base, prefix) return prefix == "" and "tem" or "tém" end,
pres_2p = "tendes", pres_3p = "têm",
pres1_and_sub = "tenh",
full_impf = "tinh", impf_1p = "tínhamos", impf_2p = "tínheis",
pret = "tivé", pret_1s = "tive", pret_3s = "teve", pret_conj = "irreg",
irreg = true,
}
},
{
match = "trazer",
forms = {
-- use 'tragu' because we're in a front environment; if we use 'trag', we'll get '#trajo'
pres1_and_sub = "tragu", pres_3s = "traz",
pret = "trouxé", pret_1s = "trouxe", pret_3s = "trouxe", pret_conj = "irreg",
fut = "trar",
irreg = true,
}
},
{
-- valer, desvaler, equivaler
match = "valer",
forms = {
pres1_and_sub = "valh",
irreg = true,
}
},
{
-- coerir, incoerir
--FIXME: This should be a part of the <i-e> section. It's an "i-e", but with accents to prevent a diphthong when it gets stressed.
match = "coerir",
forms = {
vowel_alt = "i-e",
pres1_and_sub = "coír",
pres_sub_unstressed = "coir",
}
},
{
-- We want to match antever etc. but not absolver, atrever etc. No way to avoid listing each verb.
match = match_against_verbs("ver", {"ante", "entre", "pre", "^re", "^"}),
forms = {
pres_2s = "vês", pres_3s = "vê",
pres_2p = "vedes", pres_3p = {"veem", VAR_SUPERSEDED .. "vêem"},
pres1_and_sub = "vej",
pret = "ví", pret_1s = "vi", pret_3s = "viu", pret_conj = "irreg",
pp = "visto",
irreg = true,
}
},
{
-- [[prover]] and [[desprover]] have regular preterite and past participle
match = "prover",
forms = {
pres_2s = "provês", pres_3s = "provê",
pres_2p = "provedes", pres_3p = {"proveem", VAR_SUPERSEDED .. "provêem"},
pres1_and_sub = "provej",
irreg = true,
}
},
{
-- Only envolver, revolver. Not volver, desenvolver, devolver, evolver, etc.
match = match_against_verbs("volver", {"^en", "^re"}),
forms = {short_pp = "volto"},
},
--------------------------------------------------------------------------------------------
-- -ir --
--------------------------------------------------------------------------------------------
-- Verbs not needing entries here:
--
-- abolir: per Priberam: <no_pres1_and_sub> for Brazil, use <u-o> for Portugal
-- barrir: use <only3sp>
-- carpir, colorir, demolir: use <no_pres1_and_sub>
-- descolorir: per Priberam: <no_pres_stressed> for Brazil, use <no_pres1_and_sub> for Portugal
-- delir, espavorir, falir, florir, remir, renhir: use <no_pres_stressed>
-- empedernir: per Priberam: <no_pres_stressed> for Brazil, use <i-e> for Portugal
-- transir: per Priberam: <no_pres_stressed> for Brazil, regular for Portugal
-- aspergir, despir, flectir/deflectir/genuflectir/genufletir/reflectir/refletir, mentir/desmentir,
-- sentir/assentir/consentir/dissentir/pressentir/ressentir, convergir/divergir, aderir/adherir,
-- ferir/auferir/conferir/deferir/desferir/diferir/differir/inferir/interferir/preferir/proferir/referir/transferir,
-- gerir/digerir/ingerir/sugerir, preterir, competir/repetir, servir, advertir/animadvertir/divertir,
-- vestir/investir/revestir/travestir, seguir/conseguir/desconseguir/desseguir/perseguir/prosseguir: use <i-e>
-- inerir: use <i-e> (per Infopédia, and per Priberam for Brazil), use <i-e.only3sp> (per Priberam for Portugal)
-- compelir/expelir/impelir/repelir: per Priberam: use <i-e> for Brazil, <no_pres1_and_sub> for Portugal (Infopédia
-- says <i-e>); NOTE: old module claims short_pp 'repulso' but none of Priberam, Infopédia and conjugacao.com.br agree
-- dormir, engolir, tossir, subir, acudir/sacudir, fugir, sumir/consumir (NOT assumir/presumir/resumir): use <u-o>
-- polir/repolir (claimed in old module to have no pres stressed, but Priberam disagrees for both Brazil and
-- Portugal; Infopédia lists repolir as completely regular and not like polir, but I think that's an error): use
-- <u>
-- premir: per Priberam: use <no_pres1_and_sub> for Brazil, <i> for Portugal (for Portugal, Priberam says
-- primo/primes/prime, while Infopédia says primo/premes/preme; Priberam is probably more reliable)
-- extorquir/retorquir use <no_pres1_and_sub> for Brazil, <u-o,u> for Portugal
-- agredir/progredir/regredir/transgredir: use <i>
-- denegrir, prevenir: use <i>
-- eclodir: per Priberam: regular in Brazil, <u-o.only3sp> in Portugal (Infopédia says regular)
-- cerzir: per Priberam: use <i> for Brazil, use <i-e> for Portugal (Infopédia says <i-e,i>)
-- cergir: per Priberam: use <i-e> for Brazil, no conjugation given for Portugal (Infopédia says <i-e>)
-- proibir/coibir: use <í>
-- reunir: use <ú>
-- parir/malparir: use <no_pres_stressed> (old module had pres_1s = {paro (1_defective), pairo (1_obsolete_alt)},
-- pres_2s = pares, pres_3s = pare, and subjunctive stem par- or pair-, but both Priberam and Infopédia agree
-- in these verbs being no_pres_stressed)
-- explodir/implodir: use <u-o> (claimed in old module to be <+,u-o> but neither Priberam nor Infopédia agree)
--
-- -cir alternations (aducir, ressarcir): automatically handled in combine_stem_ending()
-- -gir alternations (agir, dirigir, exigir): automatically handled in combine_stem_ending()
-- -guir alternations (e.g. conseguir): automatically handled in combine_stem_ending()
-- -quir alternations (e.g. extorquir): automatically handled in combine_stem_ending()
{
-- verbs in -air (cair, sair, trair and derivatives: decair/descair/recair, sobres(s)air,
-- abstrair/atrair/contrair/distrair/extrair/protrair/retrair/subtrair)
match = "air",
forms = {
pres1_and_sub = "ai", pres_2s = "ais", pres_3s = "ai",
-- all occurrences of accented í in endings handled in combine_stem_ending()
irreg = true,
}
},
{
-- abrir/desabrir/reabrir
match = "abrir",
forms = {pp = "aberto"}
},
{
-- cobrir/descobrir/encobrir/recobrir/redescobrir
match = "cobrir",
forms = {vowel_alt = "u-o", pp = "coberto"}
},
{
-- conduzir, produzir, reduzir, traduzir, etc.; luzir, reluzir, tremeluzir
match = "uzir",
forms = {
pres_3s = "uz",
imp_2s = {"uz", "uze"}, -- per Infopédia
irreg = true,
}
},
{
-- pedir, desimpedir, despedir, espedir, expedir, impedir
-- medir
-- comedir (per Priberam, no_pres_stressed in Brazil)
match = match_against_verbs("edir", {"m", "p"}),
forms = {
pres1_and_sub = "eç",
irreg = true,
}
},
{
-- frigir
match = "frigir",
forms = {vowel_alt = "i-e", short_pp = "frito"},
},
{
-- inserir
match = "inserir",
forms = {vowel_alt = "i-e", short_pp = {form = "inserto", footnotes = {"[European Portuguese only]"}}},
},
{
-- ir
match = "^ir",
forms = {
pres_1s = "vou", pres_2s = "vais", pres_3s = "vai",
pres_1p = "vamos", pres_2p = "ides", pres_3p = "vão",
pres_sub_1s = "vá", pres_sub_2s = "vás", pres_sub_3s = "vá",
pres_sub_1p = "vamos", pres_sub_2p = "vades", pres_sub_3p = "vão",
pret = "fô", pret_1s = "fui", pret_3s = "foi", pret_conj = "irreg",
irreg = true,
}
},
{
-- emergir, imergir, submergir
match = "mergir",
forms = {vowel_alt = {"i-e", "+"}, short_pp = "merso"},
},
{
match = "ouvir",
forms = {
pres1_and_sub = {"ouç", "oiç"},
irreg = true,
}
},
{
-- exprimir, imprimir, comprimir (but not descomprimir per Priberam), deprimir, oprimir/opprimir (but not reprimir,
-- suprimir/supprimir per Priberam)
match = match_against_verbs("primir", {"^com", "ex", "im", "de", "^o", "op"}),
forms = {short_pp = "presso"}
},
{
-- rir, sorrir
match = match_against_verbs("rir", {"^", "sor"}),
forms = {
pres_2s = "ris", pres_3s = "ri", pres_2p = "rides", pres_3p = "riem",
pres1_and_sub = "ri",
irreg = true,
}
},
{
-- distinguir, extinguir
match = "tinguir",
forms = {
short_pp = "tinto",
-- gu/g alternations handled in combine_stem_ending()
}
},
{
-- delinquir, arguir/redarguir
-- NOTE: The following is based on delinquir, with arguir/redarguir by parallelism.
-- In Priberam, delinquir and arguir are exactly parallel, but in Infopédia they aren't; only delinquir has
-- alternatives like 'delínques'. I assume this is because forms like 'delínques' are Brazilian and
-- Infopédia is from Portugal, so their coverage of Brazilian forms may be inconsistent.
match = match_against_verbs("uir", {"delinq", "arg"}),
forms = {
-- use 'ü' because we're in a front environment; if we use 'u', we'll get '#delinco', '#argo'
pres1_and_sub = {{form = AC .. "ü", footnotes = {"[Brazilian Portuguese]"}}, {form = "ü", footnotes = {"[European Portuguese]"}}},
-- FIXME: verify. This is by partial parallelism with the present subjunctive of verbs in -quar (also a
-- front environment). Infopédia has 'delinquis ou delínques' and Priberam has 'delinqúis'.
pres_2s = {
{form = AC .. "ues", footnotes = {"[Brazilian Portuguese]"}},
{form = "uis", footnotes = {"[European Portuguese]"}},
-- This form should occur only with an infinitive 'delinqüir' etc.
-- {form = AC .. VAR_SUPERSEDED .. "ües", footnotes = {"[Brazilian Portuguese]"}},
{form = VAR_SUPERSEDED .. "úis", footnotes = {"[European Portuguese]"}},
},
-- Same as previous.
pres_3s = {
{form = AC .. "ue", footnotes = {"[Brazilian Portuguese]"}},
{form = "ui", footnotes = {"[European Portuguese]"}},
-- This form should occur only with an infinitive 'delinqüir' etc.
-- {form = AC .. VAR_SUPERSEDED .. "üe", footnotes = {"[Brazilian Portuguese]"}},
{form = VAR_SUPERSEDED .. "úi", footnotes = {"[European Portuguese]"}},
},
-- Infopédia has 'delinquem ou delínquem' and Priberam has 'delinqúem'.
pres_3p = {
{form = AC .. "uem", footnotes = {"[Brazilian Portuguese]"}},
{form = "uem", footnotes = {"[European Portuguese]"}},
-- This form should occur only with an infinitive 'delinqüir' etc.
-- {form = AC .. VAR_SUPERSEDED .. "üem", footnotes = {"[Brazilian Portuguese]"}},
{form = VAR_SUPERSEDED .. "úem", footnotes = {"[European Portuguese]"}},
},
-- FIXME: The old module also had several other alternative forms (given as [123]_alt, not identified as
-- obsolete):
-- impf: delinquia/delinquía, delinquias/delinquías, delinquia/delinquía, delinquíamos, delinquíeis, delinquiam/delinquíam
-- plup: delinquira/delinquíra, delinquiras/delinquíras, delinquira/delinquíra, delinquíramos, delinquíreis, delinquiram/delinquíram
-- pres_1p = delinquimos/delinquímos, pres_2p = delinquis/delinquís
-- pret = delinqui/delinquí, delinquiste/delinquíste, delinquiu, delinquimos/delinquímos, delinquistes/delinquístes, delinquiram/delinquíram
-- pers_inf = delinquir, delinquires, delinquir, delinquirmos, delinquirdes, delinquirem/delinquírem
-- fut_sub = delinquir, delinquires, delinquir, delinquirmos, delinquirdes, delinquirem/delinquírem
--
-- None of these alternative forms can be found in the Infopédia, Priberam, Collins or Reverso conjugation
-- tables, so their status is unclear, and I have omitted them.
}
},
{
-- verbs in -truir (construir, destruir, reconstruir) but not obstruir/desobstruir, instruir, which are handled
-- by the default -uir handler below
match = match_against_verbs("struir", {"con", "de"}),
forms = {
pres_2s = {"stróis", "struis"}, pres_3s = {"strói", "strui"}, pres_3p = {"stroem", "struem"},
-- all occurrences of accented í in endings handled in combine_stem_ending()
irreg = true,
}
},
{
-- verbs in -cluir (concluir, excluir, incluir): like -uir but has short_pp concluso etc. in Brazil
match = "cluir",
forms = {
pres_2s = "cluis", pres_3s = "clui",
-- all occurrences of accented í in endings handled in combine_stem_ending()
short_pp = {form = "cluso", footnotes = {"[Brazil only]"}},
irreg = true,
}
},
{
-- puir, ruir: like -uir but defective in pres_1s, all pres sub
match = match_against_verbs("uir", {"^p", "^r"}),
forms = {
pres_2s = "uis", pres_3s = "ui",
-- all occurrences of accented í in endings handled in combine_stem_ending()
no_pres1_and_sub = true,
irreg = true,
}
},
{
-- remaining verbs in -uir (concluir/excluir/incluir/concruir/concruyr, abluir/diluir, afluir/fluir/influir,
-- aluir, anuir, atribuir/contribuir/distribuir/redistribuir/retribuir/substituir, coevoluir/evoluir,
-- constituir/destituir/instituir/reconstituir/restituir, derruir, diminuir, estatuir, fruir/usufruir, imbuir,
-- imiscuir, poluir, possuir, pruir)
-- FIXME: old module lists short pp incluso for incluir that can't be verified, ask about this
-- FIXME: handle -uyr verbs?
match = function(verb)
-- Don't match -guir verbs (e.g. [[seguir]], [[conseguir]]) or -quir verbs (e.g. [[extorquir]])
if verb:find("guir$") or verb:find("quir$") then
return nil
else
return match_against_verbs("uir", {""})(verb)
end
end,
forms = {
pres_2s = "uis", pres_3s = "ui",
-- all occurrences of accented í in endings handled in combine_stem_ending()
irreg = true,
}
},
{
-- We want to match advir, convir, devir, etc. but not ouvir, servir, etc. No way to avoid listing each verb.
match = match_against_verbs("vir", {"ad", "^a", "con", "contra", "de", "^desa", "inter", "pro", "^re", "sobre", "^"}),
forms = {
pres_2s = function(base, prefix) return prefix == "" and "vens" or "véns" end,
pres_3s = function(base, prefix) return prefix == "" and "vem" or "vém" end,
pres_2p = "vindes", pres_3p = "vêm",
pres1_and_sub = "venh",
full_impf = "vinh", impf_1p = "vínhamos", impf_2p = "vínheis",
pret = "vié", pret_1s = "vim", pret_3s = "veio", pret_conj = "irreg",
pp = "vindo",
irreg = true,
}
},
--------------------------------------------------------------------------------------------
-- misc --
--------------------------------------------------------------------------------------------
{
-- pôr, antepor, apor, compor/decompor/descompor, contrapor, depor, dispor, expor, impor, interpor, justapor,
-- opor, pospor, propor, repor, sobrepor, supor/pressupor, transpor, superseded forms like [[decompôr]], others?
match = "p[oô]r",
forms = {
pres1_and_sub = "ponh",
pres_2s = "pões", pres_3s = "põe", pres_1p = "pomos", pres_2p = "pondes", pres_3p = "põem",
full_impf = "punh", impf_1p = "púnhamos", impf_2p = "púnheis",
pret = "pusé", pret_1s = "pus", pret_3s = "pôs", pret_conj = "irreg",
pers_inf = "po",
gerund = "pondo", pp = "posto",
irreg = true,
}
},
}
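-- Decide whether a given slot should be skipped for this verb. A hedged illustration of the intent (not
-- executed): for an impersonal (only3s) verb such as [[chover]], skip_slot(base, "pres_1s") returns true
-- while skip_slot(base, "pres_3s") returns false, all imperative slots are skipped, and slots with user or
-- built-in overrides are skipped unless `allow_overrides` is given.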
local function skip_slot(base, slot, allow_overrides)
if not allow_overrides and (base.basic_overrides[slot] or
base.refl and base.basic_reflexive_only_overrides[slot]) then
-- Skip any slots for which there are overrides.
return true
end
if base.only3s and (slot:find("^pp_f") or slot:find("^pp_mp")) then
-- diluviar, atardecer, neviscar; impersonal verbs have only masc sing pp
return true
end
if not slot:find("[123]") then
-- Don't skip non-personal slots.
return false
end
if base.nofinite then
return true
end
if (base.only3s or base.only3sp or base.only3p) and (slot:find("^imp_") or slot:find("^neg_imp_")) then
return true
end
if base.only3s and not slot:find("3s") then
-- diluviar, atardecer, neviscar
return true
end
if base.only3sp and not slot:find("3[sp]") then
-- concernir
return true
end
if base.only3p and not slot:find("3p") then
-- [[caer cuatro gotas]], [[caer chuzos de punta]], [[entrarle los siete males]]
return true
end
return false
end
-- Apply vowel alternations to stem.
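-- A hedged illustration (not executed): apply_vowel_alternations("serv", {{form = "i-e"}}) yields a stem set
-- whose pres1_and_sub is "sirv" (→ 'sirvo', 'sirva') while pres_stressed is left nil, so construct_stems()
-- below falls back to the infinitive stem (→ 'serves'); apply_vowel_alternations("dorm", {{form = "u-o"}})
-- likewise yields pres1_and_sub "durm" (→ 'durmo') with pres_stressed falling back to "dorm" (→ 'dormes').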
local function apply_vowel_alternations(stem, alternations)
local alternation_stems = {}
local saw_pres1_and_sub = false
local saw_pres_stressed = false
-- Process alternations other than +.
for _, altobj in ipairs(alternations) do
local alt = altobj.form
local pres1_and_sub, pres_stressed, err
-- Treat final -gu, -qu as a consonant, so the previous vowel can alternate (e.g. conseguir -> consigo).
-- This means a verb in -guar can't have a u-ú alternation but I don't think there are any verbs like that.
stem = rsub(stem, "([gq])u$", "%1" .. TEMPC1)
if alt == "+" then
-- do nothing yet
elseif alt == "ei" then
local before_last_vowel = rmatch(stem, "^(.*)i$")
if not before_last_vowel then
err = "stem should end in -i"
else
pres1_and_sub = nil
pres_stressed = before_last_vowel .. "ei"
end
else
local before_last_vowel, last_vowel, after_last_vowel = rmatch(stem, "^(.*)(" .. V .. ")(.-[ui])$")
if not before_last_vowel then
before_last_vowel, last_vowel, after_last_vowel = rmatch(stem, "^(.*)(" .. V .. ")(.-)$")
end
if alt == "i-e" then
if last_vowel == "e" or last_vowel == "i" then
pres1_and_sub = before_last_vowel .. "i" .. after_last_vowel
if last_vowel == "i" then
pres_stressed = before_last_vowel .. "e" .. after_last_vowel
end
else
err = "should have -e- or -i- as the last vowel"
end
elseif alt == "i" then
if last_vowel == "e" then
pres1_and_sub = before_last_vowel .. "i" .. after_last_vowel
pres_stressed = pres1_and_sub
else
err = "should have -e- as the last vowel"
end
elseif alt == "u-o" then
if last_vowel == "o" or last_vowel == "u" then
pres1_and_sub = before_last_vowel .. "u" .. after_last_vowel
if last_vowel == "u" then
pres_stressed = before_last_vowel .. "o" .. after_last_vowel
end
else
err = "should have -o- or -u- as the last vowel"
end
elseif alt == "u" then
if last_vowel == "o" then
pres1_and_sub = before_last_vowel .. "u" .. after_last_vowel
pres_stressed = pres1_and_sub
else
err = "should have -o- as the last vowel"
end
elseif alt == "í" then
if last_vowel == "i" then
pres_stressed = before_last_vowel .. "í" .. after_last_vowel
else
err = "should have -i- as the last vowel"
end
elseif alt == "ú" then
if last_vowel == "u" then
pres_stressed = before_last_vowel .. "ú" .. after_last_vowel
else
err = "should have -u- as the last vowel"
end
else
error("Internal error: Unrecognized vowel alternation '" .. alt .. "'")
end
end
if pres1_and_sub then
pres1_and_sub = {form = pres1_and_sub:gsub(TEMPC1, "u"), footnotes = altobj.footnotes}
saw_pres1_and_sub = true
end
if pres_stressed then
pres_stressed = {form = pres_stressed:gsub(TEMPC1, "u"), footnotes = altobj.footnotes}
saw_pres_stressed = true
end
table.insert(alternation_stems, {
altobj = altobj,
pres1_and_sub = pres1_and_sub,
pres_stressed = pres_stressed,
err = err
})
end
-- Now do +. We check to see which stems are used by other alternations and specify those so any footnotes are
-- properly attached.
for _, alternation_stem in ipairs(alternation_stems) do
if alternation_stem.altobj.form == "+" then
local stemobj = {form = stem, footnotes = alternation_stem.altobj.footnotes}
alternation_stem.pres1_and_sub = saw_pres1_and_sub and stemobj or nil
alternation_stem.pres_stressed = saw_pres_stressed and stemobj or nil
end
end
return alternation_stems
end
-- Add the `stem` to the `ending` for the given `slot` and apply any phonetic modifications.
-- WARNING: This function is written very carefully; changes to it can easily have unintended consequences.
local function combine_stem_ending(base, slot, prefix, stem, ending, dont_include_prefix)
-- If the stem begins with an acute accent, this is a signal to move the accent onto the last vowel of the prefix.
-- Cf. míngua of minguar.
if stem:find("^" .. AC) then
stem = rsub(stem, "^" .. AC, "")
if dont_include_prefix then
error("Internal error: Can't handle acute accent at beginning of stem if dont_include_prefix is given")
end
prefix = rsub(prefix, "([aeiouyAEIOUY])([^aeiouyAEIOUY]*)$", "%1" .. AC .. "%2")
end
-- Use the full stem for checking for -gui ending and such, because 'stem' is just 'u' for [[arguir]],
-- [[delinquir]].
local full_stem = prefix .. stem
-- Include the prefix in the stem unless dont_include_prefix is given (used for the past participle stem).
if not dont_include_prefix then
stem = prefix .. stem
end
-- If the ending begins with a double asterisk, this is a signal to conditionally delete the accent on the last letter
-- of the stem. "Conditionally" means we don't do it if the last two letters would form a diphthong without the accent
-- on the second one (e.g. in [[sair]], with stem 'saí'); but as an exception, we do delete the accent in stems
-- ending in -guí, -quí (e.g. in [[conseguir]]) because in this case the ui isn't a diphthong.
if ending:find("^%*%*") then
ending = rsub(ending, "^%*%*", "")
if rfind(full_stem, "[gq]uí$") or not rfind(full_stem, V .. "[íú]$") then
stem = remove_final_accent(stem)
end
end
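-- A hedged illustration of the ** behaviour (not executed): with the pluperfect ending "**ra", [[amar]]
-- comes out as 'amara' and [[conseguir]] as 'conseguira' (accent dropped), while [[cair]] keeps the accent
-- and gives 'caíra', since -aí- would otherwise collapse into a diphthong.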
-- If the ending begins with an asterisk, this is a signal to delete the accent on the last letter of the stem.
-- E.g. fizé -> fizermos. Unlike for **, this removal is unconditional, so we get e.g. 'sairmos' not #'saírmos'.
if ending:find("^%*") then
ending = rsub(ending, "^%*", "")
stem = remove_final_accent(stem)
end
-- If ending begins with i, it must get an accent after an unstressed vowel (in some but not all cases) to prevent
-- the two merging into a diphthong:
-- * cair ->
-- * pres: caímos, caís;
-- * impf: all forms (caí-);
-- * pret: caí, caíste (but not caiu), caímos, caístes, caíram;
-- * plup: all forms (caír-);
-- * impf_sub: all forms (caíss-);
-- * fut_sub: caíres, caírem (but not cair, cairmos, cairdes)
-- * pp: caído (but not gerund caindo)
-- * atribuir, other verbs in -uir -> same pattern as for cair etc.
-- * roer ->
-- * pret: roí
-- * impf: all forms (roí-)
-- * pp: roído
if ending:find("^i") and full_stem:find("[aeiou]$") and not full_stem:find("[gq]u$") and ending ~= "ir" and
ending ~= "iu" and ending ~= "indo" and not ending:find("^ir[md]") then
ending = ending:gsub("^i", "í")
end
-- Spelling changes in the stem; it depends on whether the stem given is the pre-front-vowel or
-- pre-back-vowel variant, as indicated by `frontback`. We want these front-back spelling changes to happen
-- between stem and ending, not between prefix and stem; the prefix may not have the same "front/backness"
-- as the stem.
local is_front = rfind(ending, "^[eiéíê]")
if base.frontback == "front" and not is_front then
stem = stem:gsub("c$", "ç") -- conhecer -> conheço, vencer -> venço, descer -> desço
stem = stem:gsub("g$", "j") -- proteger -> protejo, fugir -> fujo
stem = stem:gsub("gu$", "g") -- distinguir -> distingo, conseguir -> consigo
stem = stem:gsub("qu$", "c") -- extorquir -> exturco
stem = stem:gsub("([gq])ü$", "%1u") -- argüir (superseded) -> arguo, delinqüir (superseded) -> delinquo
elseif base.frontback == "back" and is_front then
-- The following changes are all superseded so we don't do them:
-- averiguar -> averigüei, minguar -> mingüei; antiquar -> antiqüei, apropinquar -> apropinqüei
-- stem = stem:gsub("([gq])u$", "%1ü")
stem = stem:gsub("g$", "gu") -- cargar -> carguei, apagar -> apaguei
stem = stem:gsub("c$", "qu") -- marcar -> marquei
stem = stem:gsub("ç$", "c") -- começar -> comecei
-- j does not go to g here; desejar -> deseje not #desege
end
return stem .. ending
end
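-- Add the forms obtained by combining each of `stems` with each of `endings` to the given `slot`, running
-- them through combine_stem_ending() and respecting skip_slot(); `allow_overrides` lets explicit overrides
-- be written into slots that would otherwise be skipped.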
local function add3(base, slot, stems, endings, footnotes, allow_overrides)
if skip_slot(base, slot, allow_overrides) then
return
end
local function do_combine_stem_ending(stem, ending)
return combine_stem_ending(base, slot, base.prefix, stem, ending)
end
iut.add_forms(base.forms, slot, stems, endings, do_combine_stem_ending, nil, nil, footnotes)
end
local function insert_form(base, slot, form)
if not skip_slot(base, slot) then
iut.insert_form(base.forms, slot, form)
end
end
local function insert_forms(base, slot, forms)
if not skip_slot(base, slot) then
iut.insert_forms(base.forms, slot, forms)
end
end
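-- Add a whole tense (1s through 3p) built from a single stem plus six endings. A hedged illustration (not
-- executed): for a regular -ar verb such as [[amar]], add_single_stem_tense(base, "impf", "am", "ava",
-- "avas", "ava", "ávamos", "áveis", "avam") fills impf_1s..impf_3p with 'amava' ... 'amavam'.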
local function add_single_stem_tense(base, slot_pref, stems, s1, s2, s3, p1, p2, p3)
local function addit(slot, ending)
add3(base, slot_pref .. "_" .. slot, stems, ending)
end
addit("1s", s1)
addit("2s", s2)
addit("3s", s3)
addit("1p", p1)
addit("2p", p2)
addit("3p", p3)
end
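-- Assemble the stems used to conjugate one vowel-alternation variant of the verb, falling back from
-- user/built-in overrides to the alternation stems to the infinitive stem. A hedged illustration (not
-- executed): for [[dormir]] with the 'u-o' alternant, pres1 ends up as "durm", pres_stressed and
-- pres_unstressed as "dorm", pret as "dormí" and fut as "dormir".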
local function construct_stems(base, vowel_alt)
local stems = {}
stems.pres_unstressed = base.stems.pres_unstressed or base.inf_stem
stems.pres_stressed =
-- If no_pres_stressed given, pres_stressed stem should be empty so no forms are generated.
base.no_pres_stressed and {} or
base.stems.pres_stressed or
vowel_alt.pres_stressed or
base.inf_stem
stems.pres1_and_sub =
-- If no_pres_stressed given, the entire subjunctive is missing.
base.no_pres_stressed and {} or
-- If no_pres1_and_sub given, pres1 and entire subjunctive are missing.
base.no_pres1_and_sub and {} or
base.stems.pres1_and_sub or
vowel_alt.pres1_and_sub or
nil
stems.pres1 = base.stems.pres1 or stems.pres1_and_sub or stems.pres_stressed
stems.impf = base.stems.impf or base.inf_stem
stems.full_impf = base.stems.full_impf
stems.pret_base = base.stems.pret_base or base.inf_stem
stems.pret = base.stems.pret or iut.map_forms(iut.convert_to_general_list_form(stems.pret_base), function(form)
return form .. base.conj_vowel end)
stems.pret_conj = base.stems.pret_conj or base.conj
stems.fut = base.stems.fut or base.inf_stem .. base.conj
stems.cond = base.stems.cond or stems.fut
stems.pres_sub_stressed = base.stems.pres_sub_stressed or stems.pres1
stems.pres_sub_unstressed = base.stems.pres_sub_unstressed or stems.pres1_and_sub or stems.pres_unstressed
stems.sub_conj = base.stems.sub_conj or base.conj
stems.plup = base.stems.plup or stems.pret
stems.impf_sub = base.stems.impf_sub or stems.pret
stems.fut_sub = base.stems.fut_sub or stems.pret
stems.pers_inf = base.stems.pers_inf or base.inf_stem .. base.conj_vowel
stems.pp = base.stems.pp or base.conj == "ar" and
combine_stem_ending(base, "pp_ms", base.prefix, base.inf_stem, "ado", "dont include prefix") or
-- use combine_stem_ending esp. so we get roído, caído, etc.
combine_stem_ending(base, "pp_ms", base.prefix, base.inf_stem, "ido", "dont include prefix")
stems.pp_ms = stems.pp
local function masc_to_fem(form)
if rfind(form, "o$") then
return rsub(form, "o$", "a")
else
return form
end
end
stems.pp_fs = iut.map_forms(iut.convert_to_general_list_form(stems.pp_ms), masc_to_fem)
if base.stems.short_pp then
stems.short_pp_ms = base.stems.short_pp
stems.short_pp_fs = iut.map_forms(iut.convert_to_general_list_form(stems.short_pp_ms), masc_to_fem)
end
base.this_stems = stems
end
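-- Add the present indicative. A hedged illustration (not executed): for [[amar]] this produces
-- 'amo, amas, ama, amamos, amais, amam', with the stressed stem used in 1s/2s/3s/3p and the unstressed
-- stem in 1p/2p.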
local function add_present_indic(base)
local stems = base.this_stems
local function addit(slot, stems, ending)
add3(base, "pres_" .. slot, stems, ending)
end
local s2, s3, p1, p2, p3
if base.conj == "ar" then
s2, s3, p1, p2, p3 = "as", "a", "amos", "ais", "am"
elseif base.conj == "er" or base.conj == "or" then -- verbs in -por have the present overridden
s2, s3, p1, p2, p3 = "es", "e", "emos", "eis", "em"
elseif base.conj == "ir" then
s2, s3, p1, p2, p3 = "es", "e", "imos", "is", "em"
else
error("Internal error: Unrecognized conjugation " .. base.conj)
end
addit("1s", stems.pres1, "o")
addit("2s", stems.pres_stressed, s2)
addit("3s", stems.pres_stressed, s3)
addit("1p", stems.pres_unstressed, p1)
addit("2p", stems.pres_unstressed, p2)
addit("3p", stems.pres_stressed, p3)
end
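-- Add the present subjunctive, with the usual ending swap: -ar verbs take -e endings ('ame, ames, ...')
-- and -er/-ir verbs take -a endings ('coma, comas, ...').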
local function add_present_subj(base)
local stems = base.this_stems
local function addit(slot, stems, ending)
add3(base, "pres_sub_" .. slot, stems, ending)
end
local s1, s2, s3, p1, p2, p3
if stems.sub_conj == "ar" then
s1, s2, s3, p1, p2, p3 = "e", "es", "e", "emos", "eis", "em"
else
s1, s2, s3, p1, p2, p3 = "a", "as", "a", "amos", "ais", "am"
end
addit("1s", stems.pres_sub_stressed, s1)
addit("2s", stems.pres_sub_stressed, s2)
addit("3s", stems.pres_sub_stressed, s3)
addit("1p", stems.pres_sub_unstressed, p1)
addit("2p", stems.pres_sub_unstressed, p2)
addit("3p", stems.pres_sub_stressed, p3)
end
local function add_finite_non_present(base)
local stems = base.this_stems
local function add_tense(slot, stem, s1, s2, s3, p1, p2, p3)
add_single_stem_tense(base, slot, stem, s1, s2, s3, p1, p2, p3)
end
if stems.full_impf then
-- An override needs to be supplied for the impf_1p and impf_2p due to the written accent on the stem.
add_tense("impf", stems.full_impf, "a", "as", "a", {}, {}, "am")
elseif base.conj == "ar" then
add_tense("impf", stems.impf, "ava", "avas", "ava", "ávamos", "áveis", "avam")
else
add_tense("impf", stems.impf, "ia", "ias", "ia", "íamos", "íeis", "iam")
end
-- * at the beginning of the ending means to remove a final accent from the preterite stem.
if stems.pret_conj == "irreg" then
add_tense("pret", stems.pret, {}, "*ste", {}, "*mos", "*stes", "*ram")
elseif stems.pret_conj == "ar" then
add_tense("pret", stems.pret_base, "ei", "aste", "ou",
{{form = VAR_BR .. "amos", footnotes = {"[Brazilian Portuguese]"}}, {form = VAR_PT .. "ámos", footnotes = {"[European Portuguese]"}}}, "astes", "aram")
elseif stems.pret_conj == "er" then
add_tense("pret", stems.pret_base, "i", "este", "eu", "emos", "estes", "eram")
else
add_tense("pret", stems.pret_base, "i", "iste", "iu", "imos", "istes", "iram")
end
-- * at the beginning of the ending means to remove a final accent from the stem.
-- ** is similar but is "conditional" on a consonant preceding the final vowel.
add_tense("plup", stems.plup, "**ra", "**ras", "**ra", "ramos", "reis", "**ram")
add_tense("impf_sub", stems.impf_sub, "**sse", "**sses", "**sse", "ssemos", "sseis", "**ssem")
add_tense("fut_sub", stems.fut_sub, "*r", "**res", "*r", "*rmos", "*rdes", "**rem")
local mark = TEMP_MESOCLITIC_INSERTION_POINT
add_tense("fut", stems.fut, mark .. "ei", mark .. "ás", mark .. "á", mark .. "emos", mark .. "eis", mark .. "ão")
add_tense("cond", stems.cond, mark .. "ia", mark .. "ias", mark .. "ia", mark .. "íamos", mark .. "íeis", mark .. "iam")
-- Different stems for different parts of the personal infinitive to correctly handle forms of [[sair]] and [[pôr]].
add_tense("pers_inf", base.non_prefixed_verb, "", {}, "", {}, {}, {})
add_tense("pers_inf", stems.pers_inf, {}, "**res", {}, "*rmos", "*rdes", "**rem")
end
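-- Add the non-finite forms: the infinitive (plus per-person copies for reflexive verbs), the gerund and
-- the long and, where present, short past participle forms, with the long/short footnotes suppressed when
-- called from {{pt-verb}}.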
local function add_non_finite_forms(base)
local stems = base.this_stems
local function addit(slot, stems, ending, footnotes)
add3(base, slot, stems, ending, footnotes)
end
insert_form(base, "infinitive", {form = base.verb})
-- Also insert "infinitive + reflexive pronoun" combinations if we're handling a reflexive verb. See comment below for
-- "gerund + reflexive pronoun" combinations.
if base.refl then
for _, persnum in ipairs(person_number_list) do
insert_form(base, "infinitive_" .. persnum, {form = base.verb})
end
end
-- verbs in -por have the gerund overridden
local ger_ending = base.conj == "ar" and "ando" or base.conj == "er" and "endo" or "indo"
addit("gerund", stems.pres_unstressed, ger_ending)
-- Also insert "gerund + reflexive pronoun" combinations if we're handling a reflexive verb. We insert exactly the same
-- form as for the bare gerund; later on in add_reflexive_or_fixed_clitic_to_forms(), we add the appropriate clitic
-- pronouns. It's important not to do this for non-reflexive verbs, because in that case, the clitic pronouns won't be
-- added, and {{pt-verb form of}} will wrongly consider all these combinations as possible inflections of the bare
-- gerund. Thanks to [[User:JeffDoozan]] for this bug fix.
if base.refl then
for _, persnum in ipairs(person_number_list) do
addit("gerund_" .. persnum, stems.pres_unstressed, ger_ending)
end
end
-- Skip the long/short past participle footnotes if called from {{pt-verb}} so they don't show in the headword.
local long_pp_footnotes =
stems.short_pp_ms and base.alternant_multiword_spec.source_template ~= "pt-verb" and {long_pp_footnote} or nil
addit("pp_ms", stems.pp_ms, "", long_pp_footnotes)
if not base.pp_inv then
addit("pp_fs", stems.pp_fs, "", long_pp_footnotes)
addit("pp_mp", stems.pp_ms, "s", long_pp_footnotes)
addit("pp_fp", stems.pp_fs, "s", long_pp_footnotes)
end
if stems.short_pp_ms then
local short_pp_footnotes =
stems.short_pp_ms and base.alternant_multiword_spec.source_template ~= "pt-verb" and {short_pp_footnote} or nil
addit("short_pp_ms", stems.short_pp_ms, "", short_pp_footnotes)
if not base.pp_inv then
addit("short_pp_fs", stems.short_pp_fs, "", short_pp_footnotes)
addit("short_pp_mp", stems.short_pp_ms, "s", short_pp_footnotes)
addit("short_pp_fp", stems.short_pp_fs, "s", short_pp_footnotes)
end
end
end
local function copy_forms_to_imperatives(base)
-- Copy pres_3s to imp_2s since they are almost always the same.
insert_forms(base, "imp_2s", iut.map_forms(base.forms.pres_3s, function(form) return form end))
if not skip_slot(base, "imp_2p") then
-- Copy pres2p to imperative 2p minus -s since they are almost always the same.
-- But not if there's an override, to avoid possibly throwing an error.
insert_forms(base, "imp_2p", iut.map_forms(base.forms.pres_2p, function(form)
if not form:find("s$") then
error("Can't derive second-person plural imperative from second-person plural present indicative " ..
"because form '" .. form .. "' doesn't end in -s")
end
return rsub(form, "s$", "")
end))
end
-- Copy subjunctives to imperatives, unless there's an override for the given slot (as with the imp_1p of [[ir]]).
for _, persnum in ipairs({"3s", "1p", "3p"}) do
local from = "pres_sub_" .. persnum
local to = "imp_" .. persnum
insert_forms(base, to, iut.map_forms(base.forms[from], function(form) return form end))
end
end
local function process_slot_overrides(base, filter_slot, reflexive_only)
local overrides = reflexive_only and base.basic_reflexive_only_overrides or base.basic_overrides
for slot, forms in pairs(overrides) do
if not filter_slot or filter_slot(slot) then
add3(base, slot, forms, "", nil, "allow overrides")
end
end
end
-- Prefix `form` with `clitic`, adding fixed text `between` between them. Add links as appropriate unless the user
-- requested no links. Check whether form already has brackets (as will be the case if the form has a fixed clitic).
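-- A hedged illustration (not executed): with autolinking on, prefix_clitic_to_form(base, "se", " ",
-- "esbalde") returns "[[se]] [[esbalde]]", while a form that already contains links only gets the clitic
-- prefixed to it.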
local function prefix_clitic_to_form(base, clitic, between, form)
if base.alternant_multiword_spec.args.noautolinkverb then
return clitic .. between .. form
else
local clitic_pref = "[[" .. clitic .. "]]" .. between
if form:find("%[%[") then
return clitic_pref .. form
else
return clitic_pref .. "[[" .. form .. "]]"
end
end
end
-- Add the appropriate clitic pronouns in `clitics` to the forms in `base_slot`. `store_cliticized_form` is a function
-- of three arguments (clitic, formobj, cliticized_form) and should store the cliticized form for the specified clitic
-- and form object.
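-- A hedged illustration of the mesoclitic case (not executed): the future 3s form is generated as
-- "esbaldar" .. TEMP_MESOCLITIC_INSERTION_POINT .. "á", so attaching the clitic "se" yields
-- "[[esbaldar]]-[[se]]-[[esbaldará|á]]" (i.e. 'esbaldar-se-á'); without autolinking it is simply
-- "esbaldar-se-á".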
local function suffix_clitic_to_forms(base, base_slot, clitics, store_cliticized_form)
if not base.forms[base_slot] then
-- This can happen, e.g. in only3s/only3sp/only3p verbs.
return
end
local autolink = not base.alternant_multiword_spec.args.noautolinkverb
for _, formobj in ipairs(base.forms[base_slot]) do
for _, clitic in ipairs(clitics) do
local cliticized_form
if formobj.form:find(TEMP_MESOCLITIC_INSERTION_POINT) then
-- mesoclisis in future and conditional
local infinitive, suffix = rmatch(formobj.form, "^(.*)" .. TEMP_MESOCLITIC_INSERTION_POINT .. "(.*)$")
if not infinitive then
error("Internal error: Can't find mesoclitic insertion point in slot '" .. base_slot .. "', form '" ..
formobj.form .. "'")
end
local full_form = infinitive .. suffix
if autolink and not infinitive:find("%[%[") then
infinitive = "[[" .. infinitive .. "]]"
end
cliticized_form =
autolink and infinitive .. "-[[" .. clitic .. "]]-[[" .. full_form .. "|" .. suffix .. "]]" or
infinitive .. "-" .. clitic .. "-" .. suffix
else
local clitic_suffix = autolink and "-[[" .. clitic .. "]]" or "-" .. clitic
local form_needs_link = autolink and not formobj.form:find("%[%[")
if base_slot:find("1p$") then
-- Final -s disappears: esbaldávamos + nos -> esbaldávamo-nos, etc.
cliticized_form = formobj.form:gsub("s$", "")
if form_needs_link then
cliticized_form = "[[" .. formobj.form .. "|" .. cliticized_form .. "]]"
end
else
cliticized_form = formobj.form
if form_needs_link then
cliticized_form = "[[" .. cliticized_form .. "]]"
end
end
cliticized_form = cliticized_form .. clitic_suffix
end
store_cliticized_form(clitic, formobj, cliticized_form)
end
end
end
-- Add a reflexive pronoun or fixed clitic (FIXME: not working), as appropriate to the base forms that were generated.
-- `do_joined` means to do only the forms where the pronoun is joined to the end of the form; otherwise, do only the
-- forms where it is not joined and precedes the form.
local function add_reflexive_or_fixed_clitic_to_forms(base, do_reflexive, do_joined)
for _, slotaccel in ipairs(base.alternant_multiword_spec.verb_slots_basic) do
local slot, accel = unpack(slotaccel)
local clitic
if not do_reflexive then
clitic = base.clitic
elseif slot:find("[123]") then
local persnum = slot:match("^.*_(.-)$")
clitic = person_number_to_reflexive_pronoun[persnum]
else
clitic = "se"
end
if base.forms[slot] then
if do_reflexive and slot:find("^pp_") or slot == "infinitive_linked" then
-- do nothing with reflexive past participles or with infinitive linked (handled at the end)
elseif slot:find("^neg_imp_") then
error("Internal error: Should not have forms set for negative imperative at this stage")
else
local slot_has_suffixed_clitic = not slot:find("_sub")
-- Maybe generate non-reflexive parts and separated syntactic variants for use in {{pt-verb form of}}.
-- See comment in add_slots() above `need_special_verb_form_of_slots`. Check for do_joined so we only
-- run this code once.
if do_reflexive and do_joined and base.alternant_multiword_spec.source_template == "pt-verb form of" and
-- Skip personal variants of infinitives and gerunds so we don't think [[esbaldando]] is a
-- non-reflexive equivalent of [[esbaldando-me]].
not slot:find("infinitive_") and not slot:find("gerund_") then
-- Clone the forms because we will be destructively modifying them just below, adding the reflexive
-- pronoun.
insert_forms(base, slot .. "_non_reflexive", mw.clone(base.forms[slot]))
if slot_has_suffixed_clitic then
insert_forms(base, slot .. "_variant", iut.map_forms(base.forms[slot], function(form)
return prefix_clitic_to_form(base, clitic, " ... ", form)
end))
end
end
if slot_has_suffixed_clitic then
if do_joined then
suffix_clitic_to_forms(base, slot, {clitic},
function(clitic, formobj, cliticized_form)
formobj.form = cliticized_form
end
)
end
elseif not do_joined then
-- Add clitic as separate word before all other forms.
for _, form in ipairs(base.forms[slot]) do
form.form = prefix_clitic_to_form(base, clitic, " ", form.form)
end
end
end
end
end
end
local function handle_infinitive_linked(base)
-- Compute linked versions of potential lemma slots, for use in {{pt-verb}}.
-- We substitute the original lemma (before removing links) for forms that
-- are the same as the lemma, if the original lemma has links.
for _, slot in ipairs({"infinitive"}) do
insert_forms(base, slot .. "_linked", iut.map_forms(base.forms[slot], function(form)
if form == base.lemma and rfind(base.linked_lemma, "%[%[") then
return base.linked_lemma
else
return form
end
end))
end
end
local function generate_negative_imperatives(base)
-- Copy subjunctives to negative imperatives, preceded by "não".
for _, persnum in ipairs(neg_imp_person_number_list) do
local from = "pres_sub_" .. persnum
local to = "neg_imp_" .. persnum
insert_forms(base, to, iut.map_forms(base.forms[from], function(form)
if base.alternant_multiword_spec.args.noautolinkverb then
return "não " .. form
elseif form:find("%[%[") then
-- already linked, e.g. when reflexive
return "[[não]] " .. form
else
return "[[não]] [[" .. form .. "]]"
end
end))
end
end
-- Process specs given by the user using 'addnote[SLOTSPEC][FOOTNOTE][FOOTNOTE][...]'.
local function process_addnote_specs(base)
for _, spec in ipairs(base.addnote_specs) do
for _, slot_spec in ipairs(spec.slot_specs) do
slot_spec = "^" .. slot_spec .. "$"
for slot, forms in pairs(base.forms) do
if rfind(slot, slot_spec) then
-- To save on memory, side-effect the existing forms.
for _, form in ipairs(forms) do
form.footnotes = iut.combine_footnotes(form.footnotes, spec.footnotes)
end
end
end
end
end
end
local function add_missing_links_to_forms(base)
-- Any forms without links should get them now. Redundant ones will be stripped later.
for slot, forms in pairs(base.forms) do
for _, form in ipairs(forms) do
if not form.form:find("%[%[") then
form.form = "[[" .. form.form .. "]]"
end
end
end
end
-- Remove special characters added to future and conditional forms to indicate mesoclitic insertion points.
local function remove_mesoclitic_insertion_points(base)
for slot, forms in pairs(base.forms) do
if slot:find("^fut_") or slot:find("^cond_") then
for _, form in ipairs(forms) do
form.form = form.form:gsub(TEMP_MESOCLITIC_INSERTION_POINT, "")
end
end
end
end
-- If called from {{pt-verb}}, remove superseded forms; otherwise add a footnote indicating they are superseded.
local function process_superseded_forms(base)
if base.alternant_multiword_spec.source_template == "pt-verb" then
for slot, forms in pairs(base.forms) do
-- As an optimization, check if there are any superseded forms and don't do anything if not.
local saw_superseded = false
for _, form in ipairs(forms) do
if form.form:find(VAR_SUPERSEDED) then
saw_superseded = true
break
end
end
if saw_superseded then
base.forms[slot] = iut.flatmap_forms(base.forms[slot], function(form)
if form:find(VAR_SUPERSEDED) then
return {}
else
return {form}
end
end)
end
end
else
for slot, forms in pairs(base.forms) do
for _, form in ipairs(forms) do
if form.form:find(VAR_SUPERSEDED) then
form.footnotes = iut.combine_footnotes(form.footnotes, {"[superseded]"})
end
end
end
end
end
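-- Top-level driver: build the stems for each vowel alternant, generate all finite and non-finite forms,
-- apply slot overrides, derive the (negative) imperatives, attach reflexive pronouns/clitics, and finally
-- clean up links, mesoclitic insertion points and superseded variants.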
local function conjugate_verb(base)
for _, vowel_alt in ipairs(base.vowel_alt_stems) do
construct_stems(base, vowel_alt)
add_present_indic(base)
add_present_subj(base)
end
add_finite_non_present(base)
add_non_finite_forms(base)
-- do non-reflexive non-imperative slot overrides
process_slot_overrides(base, function(slot)
return not slot:find("^imp_") and not slot:find("^neg_imp_")
end)
-- This should happen after process_slot_overrides() in case a derived slot is based on an override
-- (as with the imp_3s of [[dar]], [[estar]]).
copy_forms_to_imperatives(base)
-- do non-reflexive positive imperative slot overrides
process_slot_overrides(base, function(slot)
return slot:find("^imp_")
end)
-- We need to add joined reflexives, then joined and non-joined clitics, then non-joined reflexives, so we get
-- [[esbalda-te]] but [[não]] [[te]] [[esbalde]].
if base.refl then
-- This should happen after remove_monosyllabic_accents() so the * marking the preservation of monosyllabic
-- accents doesn't end up in the middle of a word.
add_reflexive_or_fixed_clitic_to_forms(base, "do reflexive", "do joined")
process_slot_overrides(base, nil, "do reflexive") -- do reflexive-only slot overrides
add_reflexive_or_fixed_clitic_to_forms(base, "do reflexive", false)
end
-- This should happen after add_reflexive_or_fixed_clitic_to_forms() so negative imperatives get the reflexive pronoun
-- and clitic in them.
generate_negative_imperatives(base)
-- do non-reflexive negative imperative slot overrides
-- FIXME: What about reflexive negative imperatives?
process_slot_overrides(base, function(slot)
return slot:find("^neg_imp_")
end)
-- This should happen before add_missing_links_to_forms() so that the comparison `form == base.lemma`
-- in handle_infinitive_linked() works correctly and compares unlinked forms to unlinked forms.
handle_infinitive_linked(base)
process_addnote_specs(base)
if not base.alternant_multiword_spec.args.noautolinkverb then
add_missing_links_to_forms(base)
end
remove_mesoclitic_insertion_points(base)
process_superseded_forms(base)
end
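-- Parse one angle-bracket indicator spec into a `base` object. A hedged illustration of the accepted
-- syntax, based on the notes near the top of this section: vowel alternants such as <i-e>, <u-o> or <í>
-- (comma-separated when there are several, e.g. <i-e,i>), indicator flags such as <no_pres_stressed> or
-- combined dot-separated specs such as <u-o.only3sp>, plus colon-separated stem or form overrides and
-- 'addnote[SLOTSPEC][FOOTNOTE]...' specs.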
local function parse_indicator_spec(angle_bracket_spec)
-- Store the original angle bracket spec so we can reconstruct the overall conj spec with the lemma(s) in them.
local base = {
angle_bracket_spec = angle_bracket_spec,
user_basic_overrides = {},
user_stems = {},
addnote_specs = {},
}
local function parse_err(msg)
error(msg .. ": " .. angle_bracket_spec)
end
local function fetch_footnotes(separated_group)
local footnotes
for j = 2, #separated_group - 1, 2 do
if separated_group[j + 1] ~= "" then
parse_err("Extraneous text after bracketed footnotes: '" .. table.concat(separated_group) .. "'")
end
if not footnotes then
footnotes = {}
end
table.insert(footnotes, separated_group[j])
end
return footnotes
end
local inside = angle_bracket_spec:match("^<(.*)>$")
assert(inside)
if inside == "" then
return base
end
local segments = iut.parse_balanced_segment_run(inside, "[", "]")
local dot_separated_groups = iut.split_alternating_runs(segments, "%.")
for i, dot_separated_group in ipairs(dot_separated_groups) do
local first_element = dot_separated_group[1]
if first_element == "addnote" then
local spec_and_footnotes = fetch_footnotes(dot_separated_group)
if #spec_and_footnotes < 2 then
parse_err("Spec with 'addnote' should be of the form 'addnote[SLOTSPEC][FOOTNOTE][FOOTNOTE][...]'")
end
local slot_spec = table.remove(spec_and_footnotes, 1)
local slot_spec_inside = rmatch(slot_spec, "^%[(.*)%]$")
if not slot_spec_inside then
parse_err("Internal error: slot_spec " .. slot_spec .. " should be surrounded with brackets")
end
local slot_specs = rsplit(slot_spec_inside, ",")
-- FIXME: Here, [[Module:it-verb]] called strip_spaces(). Generally we don't do this. Should we?
table.insert(base.addnote_specs, {slot_specs = slot_specs, footnotes = spec_and_footnotes})
elseif indicator_flags[first_element] then
if #dot_separated_group > 1 then
parse_err("No footnotes allowed with '" .. first_element .. "' spec")
end
if base[first_element] then
parse_err("Spec '" .. first_element .. "' specified twice")
end
base[first_element] = true
elseif rfind(first_element, ":") then
local colon_separated_groups = iut.split_alternating_runs(dot_separated_group, "%s*:%s*")
local first_element = colon_separated_groups[1][1]
if #colon_separated_groups[1] > 1 then
parse_err("Can't attach footnotes directly to '" .. first_element .. "' spec; attach them to the " ..
"colon-separated values following the initial colon")
end
if overridable_stems[first_element] then
if base.user_stems[first_element] then
parse_err("Overridable stem '" .. first_element .. "' specified twice")
end
table.remove(colon_separated_groups, 1)
base.user_stems[first_element] = overridable_stems[first_element](colon_separated_groups,
{prefix = first_element, base = base, parse_err = parse_err, fetch_footnotes = fetch_footnotes})
else -- assume a basic override; we validate further later when the possible slots are available
if base.user_basic_overrides[first_element] then
parse_err("Basic override '" .. first_element .. "' specified twice")
end
table.remove(colon_separated_groups, 1)
base.user_basic_overrides[first_element] = allow_multiple_values(colon_separated_groups,
{prefix = first_element, base = base, parse_err = parse_err, fetch_footnotes = fetch_footnotes})
end
else
local comma_separated_groups = iut.split_alternating_runs(dot_separated_group, "%s*,%s*")
for j = 1, #comma_separated_groups do
local alt = comma_separated_groups[j][1]
if not vowel_alternants[alt] then
if #comma_separated_groups == 1 then
parse_err("Unrecognized spec or vowel alternant '" .. alt .. "'")
else
parse_err("Unrecognized vowel alternant '" .. alt .. "'")
end
end
if base.vowel_alt then
for _, existing_alt in ipairs(base.vowel_alt) do
if existing_alt.form == alt then
parse_err("Vowel alternant '" .. alt .. "' specified twice")
end
end
else
base.vowel_alt = {}
end
table.insert(base.vowel_alt, {form = alt, footnotes = fetch_footnotes(comma_separated_groups[j])})
end
end
end
return base
end
-- Normalize all lemmas, substituting the pagename for blank lemmas and adding links to multiword lemmas.
local function normalize_all_lemmas(alternant_multiword_spec, head)
-- (1) Add links to all before and after text. Remember the original text so we can reconstruct the verb spec later.
if not alternant_multiword_spec.args.noautolinktext then
iut.add_links_to_before_and_after_text(alternant_multiword_spec, "remember original")
end
-- (2) Remove any links from the lemma, but remember the original form
-- so we can use it below in the 'lemma_linked' form.
iut.map_word_specs(alternant_multiword_spec, function(base)
if base.lemma == "" then
base.lemma = head
end
base.user_specified_lemma = base.lemma
base.lemma = m_links.remove_links(base.lemma)
local refl_verb = base.lemma
local verb, refl = rmatch(refl_verb, "^(.-)%-(se)$")
if not verb then
verb, refl = refl_verb, nil
end
base.user_specified_verb = verb
base.refl = refl
base.verb = base.user_specified_verb
local linked_lemma
if alternant_multiword_spec.args.noautolinkverb or base.user_specified_lemma:find("%[%[") then
linked_lemma = base.user_specified_lemma
elseif base.refl then
-- Reconstruct the linked lemma with separate links around base verb and reflexive pronoun.
linked_lemma = base.user_specified_verb == base.verb and "[[" .. base.user_specified_verb .. "]]" or
"[[" .. base.verb .. "|" .. base.user_specified_verb .. "]]"
linked_lemma = linked_lemma .. (refl and "-[[" .. refl .. "]]" or "")
else
-- Add links to the lemma so the user doesn't specifically need to, since we preserve
-- links in multiword lemmas and include links in non-lemma forms rather than allowing
-- the entire form to be a link.
linked_lemma = iut.add_links(base.user_specified_lemma)
end
base.linked_lemma = linked_lemma
end)
end
local function detect_indicator_spec(base)
if (base.only3s and 1 or 0) + (base.only3sp and 1 or 0) + (base.only3p and 1 or 0) > 1 then
error("Only one of 'only3s', 'only3sp' and 'only3p' can be specified")
end
base.forms = {}
base.stems = {}
base.basic_overrides = {}
base.basic_reflexive_only_overrides = {}
if not base.no_built_in then
for _, built_in_conj in ipairs(built_in_conjugations) do
if type(built_in_conj.match) == "function" then
base.prefix, base.non_prefixed_verb = built_in_conj.match(base.verb)
elseif built_in_conj.match:find("^%^") and rsub(built_in_conj.match, "^%^", "") == base.verb then
-- begins with ^, for exact match, and matches
base.prefix, base.non_prefixed_verb = "", base.verb
else
base.prefix, base.non_prefixed_verb = rmatch(base.verb, "^(.*)(" .. built_in_conj.match .. ")$")
end
if base.prefix then
-- we found a built-in verb
for stem, forms in pairs(built_in_conj.forms) do
if type(forms) == "function" then
forms = forms(base, base.prefix)
end
if stem:find("^refl_") then
stem = stem:gsub("^refl_", "")
if not base.alternant_multiword_spec.verb_slots_basic_map[stem] then
error("Internal error: setting for 'refl_" .. stem .. "' does not refer to a basic verb slot")
end
base.basic_reflexive_only_overrides[stem] = forms
elseif base.alternant_multiword_spec.verb_slots_basic_map[stem] then
-- an individual form override of a basic form
base.basic_overrides[stem] = forms
else
base.stems[stem] = forms
end
end
break
end
end
end
-- Override built-in-verb stems and overrides with user-specified ones.
for stem, values in pairs(base.user_stems) do
base.stems[stem] = values
end
for override, values in pairs(base.user_basic_overrides) do
if not base.alternant_multiword_spec.verb_slots_basic_map[override] then
error("Unrecognized override '" .. override .. "': " .. base.angle_bracket_spec)
end
base.basic_overrides[override] = values
end
base.prefix = base.prefix or ""
base.non_prefixed_verb = base.non_prefixed_verb or base.verb
local inf_stem, suffix = rmatch(base.non_prefixed_verb, "^(.*)([aeioô]r)$")
if not inf_stem then
error("Unrecognized infinitive: " .. base.verb)
end
base.inf_stem = inf_stem
suffix = suffix == "ôr" and "or" or suffix
base.conj = suffix
base.conj_vowel = suffix == "ar" and "á" or suffix == "ir" and "í" or "ê"
base.frontback = suffix == "ar" and "back" or "front"
if base.stems.vowel_alt then -- built-in verb with specified vowel alternation
if base.vowel_alt then
error(base.verb .. " is a recognized built-in verb, and should not have vowel alternations specified with it")
end
base.vowel_alt = iut.convert_to_general_list_form(base.stems.vowel_alt)
end
-- Propagate built-in-verb indicator flags to `base` and combine with user-specified flags.
for indicator_flag, _ in pairs(indicator_flags) do
base[indicator_flag] = base[indicator_flag] or base.stems[indicator_flag]
end
-- Convert vowel alternation indicators into stems.
local vowel_alt = base.vowel_alt or {{form = "+"}}
base.vowel_alt_stems = apply_vowel_alternations(base.inf_stem, vowel_alt)
for _, vowel_alt_stems in ipairs(base.vowel_alt_stems) do
if vowel_alt_stems.err then
error("To use '" .. vowel_alt_stems.altobj.form .. "', present stem '" .. base.prefix .. base.inf_stem .. "' " ..
vowel_alt_stems.err)
end
end
end
local function detect_all_indicator_specs(alternant_multiword_spec)
-- Propagate some settings up; some are used internally, others by [[Module:pt-headword]].
iut.map_word_specs(alternant_multiword_spec, function(base)
-- Internal indicator flags. Do these before calling detect_indicator_spec() because add_slots() uses them.
for _, prop in ipairs { "refl", "clitic" } do
if base[prop] then
alternant_multiword_spec[prop] = true
end
end
base.alternant_multiword_spec = alternant_multiword_spec
end)
add_slots(alternant_multiword_spec)
alternant_multiword_spec.vowel_alt = {}
iut.map_word_specs(alternant_multiword_spec, function(base)
detect_indicator_spec(base)
-- User-specified indicator flags. Do these after calling detect_indicator_spec() because the latter may set these
-- indicators for built-in verbs.
for prop, _ in pairs(indicator_flags) do
if base[prop] then
alternant_multiword_spec[prop] = true
end
end
-- Vowel alternants. Do these after calling detect_indicator_spec() because the latter sets base.vowel_alt for
-- built-in verbs.
if base.vowel_alt then
for _, altobj in ipairs(base.vowel_alt) do
m_table.insertIfNot(alternant_multiword_spec.vowel_alt, altobj.form)
end
end
end)
end
local function add_categories_and_annotation(alternant_multiword_spec, base, multiword_lemma)
local function insert_ann(anntype, value)
m_table.insertIfNot(alternant_multiword_spec.annotation[anntype], value)
end
local function insert_cat(cat, also_when_multiword)
-- Don't place multiword terms in categories like 'Portuguese verbs ending in -ar' to avoid spamming the
-- categories with such terms.
if also_when_multiword or not multiword_lemma then
m_table.insertIfNot(alternant_multiword_spec.categories, "Portuguese " .. cat)
end
end
if check_for_red_links and alternant_multiword_spec.source_template == "pt-conj" and multiword_lemma then
for _, slot_and_accel in ipairs(alternant_multiword_spec.all_verb_slots) do
local slot = slot_and_accel[1]
local forms = base.forms[slot]
local must_break = false
if forms then
for _, form in ipairs(forms) do
if not form.form:find("%[%[") then
local title = mw.title.new(form.form)
if title and not title.exists then
insert_cat("verbs with red links in their inflection tables")
must_break = true
break
end
end
end
end
if must_break then
break
end
end
end
insert_cat("verbs ending in -" .. base.conj)
if base.irreg then
insert_ann("irreg", "irregular")
insert_cat("irregular verbs")
else
insert_ann("irreg", "regular")
end
if base.only3s then
insert_ann("defective", "impersonal")
insert_cat("impersonal verbs")
elseif base.only3sp then
insert_ann("defective", "third-person only")
insert_cat("third-person-only verbs")
elseif base.only3p then
insert_ann("defective", "third-person plural only")
insert_cat("third-person-plural-only verbs")
elseif base.no_pres_stressed or base.no_pres1_and_sub then
insert_ann("defective", "defective")
insert_cat("defective verbs")
else
insert_ann("defective", "regular")
end
if base.stems.short_pp then
insert_ann("short_pp", "irregular short past participle")
insert_cat("verbs with irregular short past participle")
else
insert_ann("short_pp", "regular")
end
if base.clitic then
insert_cat("verbs with lexical clitics")
end
if base.refl then
insert_cat("reflexive verbs")
end
if base.e_ei_cat then
insert_ann("vowel_alt", "''e'' becomes ''ei'' when stressed")
insert_cat("verbs with e becoming ei when stressed")
elseif not base.vowel_alt then
insert_ann("vowel_alt", "non-alternating")
else
for _, alt in ipairs(base.vowel_alt) do
if alt.form == "+" then
insert_ann("vowel_alt", "non-alternating")
else
insert_ann("vowel_alt", vowel_alternant_to_desc[alt.form])
insert_cat("verbs with " .. vowel_alternant_to_cat[alt.form])
end
end
end
local cons_alt = base.stems.cons_alt
if cons_alt == nil then
if base.conj == "ar" then
if base.inf_stem:find("ç$") then
cons_alt = "c-ç"
elseif base.inf_stem:find("c$") then
cons_alt = "c-qu"
elseif base.inf_stem:find("g$") then
cons_alt = "g-gu"
end
else
if base.no_pres_stressed or base.no_pres1_and_sub then
cons_alt = nil -- no e.g. c-ç alternation in this case
elseif base.inf_stem:find("c$") then
cons_alt = "c-ç"
elseif base.inf_stem:find("qu$") then
cons_alt = "c-qu"
elseif base.inf_stem:find("g$") then
cons_alt = "g-j"
elseif base.inf_stem:find("gu$") then
cons_alt = "g-gu"
end
end
end
if cons_alt then
local desc = cons_alt .. " alternation"
insert_ann("cons_alt", desc)
insert_cat("verbs with " .. desc)
else
insert_ann("cons_alt", "non-alternating")
end
end
-- Compute the categories to add the verb to, as well as the annotation to display in the
-- conjugation title bar. We combine the two computations because the categories and the
-- title bar contain similar information.
local function compute_categories_and_annotation(alternant_multiword_spec)
alternant_multiword_spec.categories = {}
local ann = {}
alternant_multiword_spec.annotation = ann
ann.irreg = {}
ann.short_pp = {}
ann.defective = {}
ann.vowel_alt = {}
ann.cons_alt = {}
local multiword_lemma = false
for _, form in ipairs(alternant_multiword_spec.forms.infinitive) do
if form.form:find(" ") then
multiword_lemma = true
break
end
end
iut.map_word_specs(alternant_multiword_spec, function(base)
add_categories_and_annotation(alternant_multiword_spec, base, multiword_lemma)
end)
local ann_parts = {}
local irreg = table.concat(ann.irreg, " or ")
if irreg ~= "" and irreg ~= "regular" then
table.insert(ann_parts, irreg)
end
local short_pp = table.concat(ann.short_pp, " or ")
if short_pp ~= "" and short_pp ~= "regular" then
table.insert(ann_parts, short_pp)
end
local defective = table.concat(ann.defective, " or ")
if defective ~= "" and defective ~= "regular" then
table.insert(ann_parts, defective)
end
local vowel_alt = table.concat(ann.vowel_alt, " or ")
if vowel_alt ~= "" and vowel_alt ~= "non-alternating" then
table.insert(ann_parts, vowel_alt)
end
local cons_alt = table.concat(ann.cons_alt, " or ")
if cons_alt ~= "" and cons_alt ~= "non-alternating" then
table.insert(ann_parts, cons_alt)
end
alternant_multiword_spec.annotation = table.concat(ann_parts, "; ")
end
local function show_forms(alternant_multiword_spec)
local lemmas = alternant_multiword_spec.forms.infinitive
alternant_multiword_spec.lemmas = lemmas -- save for later use in make_table()
if alternant_multiword_spec.forms.short_pp_ms then
alternant_multiword_spec.has_short_pp = true
end
local reconstructed_verb_spec = iut.reconstruct_original_spec(alternant_multiword_spec)
local function transform_accel_obj(slot, formobj, accel_obj)
-- No accelerators for negative imperatives, which are always multiword and derived directly from the
-- present subjunctive.
if slot:find("^neg_imp") then
return nil
end
if accel_obj then
if slot:find("^pp_") then
accel_obj.form = slot
elseif slot == "gerund" then
accel_obj.form = "gerund-" .. reconstructed_verb_spec
else
accel_obj.form = "verb-form-" .. reconstructed_verb_spec
end
end
return accel_obj
end
-- Italicize superseded forms.
local function generate_link(data)
local formval_for_link = data.form.formval_for_link
if formval_for_link:find(VAR_SUPERSEDED) then
formval_for_link = formval_for_link:gsub(VAR_SUPERSEDED, "")
return m_links.full_link({lang = lang, term = formval_for_link, tr = "-", accel = data.form.accel_obj},
"term") .. iut.get_footnote_text(data.form.footnotes, data.footnote_obj)
end
end
local props = {
lang = lang,
lemmas = lemmas,
transform_accel_obj = transform_accel_obj,
canonicalize = function(form) return export.remove_variant_codes(form, "keep superseded") end,
generate_link = generate_link,
slot_list = alternant_multiword_spec.verb_slots_basic,
}
iut.show_forms(alternant_multiword_spec.forms, props)
alternant_multiword_spec.footnote_basic = alternant_multiword_spec.forms.footnote
end
local notes_template = [=[
<div style="width:100%;text-align:left;background:#d9ebff">
<div style="display:inline-block;text-align:left;padding-left:1em;padding-right:1em">
{footnote}
</div></div>]=]
local basic_table = [=[
{description}<div class="NavFrame">
<div class="NavHead" align=center> Conjugation of {title} (See [[Appendix:Portuguese verbs]])</div>
<div class="NavContent" align="left">
{\op}| class="inflection-table" style="background:#F6F6F6; text-align: left; border: 1px solid #999999;" cellpadding="3" cellspacing="0"
|-
! style="border: 1px solid #999999; background:#B0B0B0" rowspan="2" |
! style="border: 1px solid #999999; background:#D0D0D0" colspan="3" | Singular
! style="border: 1px solid #999999; background:#D0D0D0" colspan="3" | Plural
|-
! style="border: 1px solid #999999; background:#D0D0D0; width:12.5%" | First-person<br />(<<eu>>)
! style="border: 1px solid #999999; background:#D0D0D0; width:12.5%" | Second-person<br />(<<tu>>)
! style="border: 1px solid #999999; background:#D0D0D0; width:12.5%" | Third-person<br />(<<ele>> / <<ela>> / <<você>>)
! style="border: 1px solid #999999; background:#D0D0D0; width:12.5%" | First-person<br />(<<nós>>)
! style="border: 1px solid #999999; background:#D0D0D0; width:12.5%" | Second-person<br />(<<vós>>)
! style="border: 1px solid #999999; background:#D0D0D0; width:12.5%" | Third-person<br />(<<eles>> / <<elas>> / <<vocês>>)
|-
! style="border: 1px solid #999999; background:#c498ff" colspan="7" | ''<span title="infinitivo">Infinitive</span>''
|-
! style="border: 1px solid #999999; background:#a478df" | '''<span title="infinitivo impessoal">Impersonal</span>'''
| style="border: 1px solid #999999; vertical-align: top;" colspan="6" | {infinitive}
|-
! style="border: 1px solid #999999; background:#a478df" | '''<span title="infinitivo pessoal">Personal</span>'''
| style="border: 1px solid #999999; vertical-align: top;" | {pers_inf_1s}
| style="border: 1px solid #999999; vertical-align: top;" | {pers_inf_2s}
| style="border: 1px solid #999999; vertical-align: top;" | {pers_inf_3s}
| style="border: 1px solid #999999; vertical-align: top;" | {pers_inf_1p}
| style="border: 1px solid #999999; vertical-align: top;" | {pers_inf_2p}
| style="border: 1px solid #999999; vertical-align: top;" | {pers_inf_3p}
|-
! style="border: 1px solid #999999; background:#98ffc4" colspan="7" | ''<span title="gerúndio">Gerund</span>''
|-
| style="border: 1px solid #999999; background:#78dfa4" |
| style="border: 1px solid #999999; vertical-align: top;" colspan="6" | {gerund}
|-{pp_clause}
! style="border: 1px solid #999999; background:#d0dff4" colspan="7" | ''<span title="indicativo">Indicative</span>''
|-
! style="border: 1px solid #999999; background:#b0bfd4" | <span title="presente">Present</span>
| style="border: 1px solid #999999; vertical-align: top;" | {pres_1s}
| style="border: 1px solid #999999; vertical-align: top;" | {pres_2s}
| style="border: 1px solid #999999; vertical-align: top;" | {pres_3s}
| style="border: 1px solid #999999; vertical-align: top;" | {pres_1p}
| style="border: 1px solid #999999; vertical-align: top;" | {pres_2p}
| style="border: 1px solid #999999; vertical-align: top;" | {pres_3p}
|-
! style="border: 1px solid #999999; background:#b0bfd4" | <span title="pretérito imperfeito">Imperfect</span>
| style="border: 1px solid #999999; vertical-align: top;" | {impf_1s}
| style="border: 1px solid #999999; vertical-align: top;" | {impf_2s}
| style="border: 1px solid #999999; vertical-align: top;" | {impf_3s}
| style="border: 1px solid #999999; vertical-align: top;" | {impf_1p}
| style="border: 1px solid #999999; vertical-align: top;" | {impf_2p}
| style="border: 1px solid #999999; vertical-align: top;" | {impf_3p}
|-
! style="border: 1px solid #999999; background:#b0bfd4" | <span title="pretérito perfeito">Preterite</span>
| style="border: 1px solid #999999; vertical-align: top;" | {pret_1s}
| style="border: 1px solid #999999; vertical-align: top;" | {pret_2s}
| style="border: 1px solid #999999; vertical-align: top;" | {pret_3s}
| style="border: 1px solid #999999; vertical-align: top;" | {pret_1p}
| style="border: 1px solid #999999; vertical-align: top;" | {pret_2p}
| style="border: 1px solid #999999; vertical-align: top;" | {pret_3p}
|-
! style="border: 1px solid #999999; background:#b0bfd4" | <span title="pretérito mais-que-perfeito simples">Pluperfect</span>
| style="border: 1px solid #999999; vertical-align: top;" | {plup_1s}
| style="border: 1px solid #999999; vertical-align: top;" | {plup_2s}
| style="border: 1px solid #999999; vertical-align: top;" | {plup_3s}
| style="border: 1px solid #999999; vertical-align: top;" | {plup_1p}
| style="border: 1px solid #999999; vertical-align: top;" | {plup_2p}
| style="border: 1px solid #999999; vertical-align: top;" | {plup_3p}
|-
! style="border: 1px solid #999999; background:#b0bfd4" | <span title="futuro do presente">Future</span>
| style="border: 1px solid #999999; vertical-align: top;" | {fut_1s}
| style="border: 1px solid #999999; vertical-align: top;" | {fut_2s}
| style="border: 1px solid #999999; vertical-align: top;" | {fut_3s}
| style="border: 1px solid #999999; vertical-align: top;" | {fut_1p}
| style="border: 1px solid #999999; vertical-align: top;" | {fut_2p}
| style="border: 1px solid #999999; vertical-align: top;" | {fut_3p}
|-
! style="border: 1px solid #999999; background:#b0bfd4" | <span title="condicional / futuro do pretérito">Conditional</span>
| style="border: 1px solid #999999; vertical-align: top;" | {cond_1s}
| style="border: 1px solid #999999; vertical-align: top;" | {cond_2s}
| style="border: 1px solid #999999; vertical-align: top;" | {cond_3s}
| style="border: 1px solid #999999; vertical-align: top;" | {cond_1p}
| style="border: 1px solid #999999; vertical-align: top;" | {cond_2p}
| style="border: 1px solid #999999; vertical-align: top;" | {cond_3p}
|-
! style="border: 1px solid #999999; background:#d0f4d0" colspan="7" | ''<span title="conjuntivo (pt) / subjuntivo (br)">Subjunctive</span>''
|-
! style="border: 1px solid #999999; background:#b0d4b0" | <span title=" presente do conjuntivo (pt) / subjuntivo (br)">Present</span>
| style="border: 1px solid #999999; vertical-align: top;" | {pres_sub_1s}
| style="border: 1px solid #999999; vertical-align: top;" | {pres_sub_2s}
| style="border: 1px solid #999999; vertical-align: top;" | {pres_sub_3s}
| style="border: 1px solid #999999; vertical-align: top;" | {pres_sub_1p}
| style="border: 1px solid #999999; vertical-align: top;" | {pres_sub_2p}
| style="border: 1px solid #999999; vertical-align: top;" | {pres_sub_3p}
|-
! style="border: 1px solid #999999; background:#b0d4b0" | <span title="pretérito imperfeito do conjuntivo (pt) / subjuntivo (br)">Imperfect</span>
| style="border: 1px solid #999999; vertical-align: top;" | {impf_sub_1s}
| style="border: 1px solid #999999; vertical-align: top;" | {impf_sub_2s}
| style="border: 1px solid #999999; vertical-align: top;" | {impf_sub_3s}
| style="border: 1px solid #999999; vertical-align: top;" | {impf_sub_1p}
| style="border: 1px solid #999999; vertical-align: top;" | {impf_sub_2p}
| style="border: 1px solid #999999; vertical-align: top;" | {impf_sub_3p}
|-
! style="border: 1px solid #999999; background:#b0d4b0" | <span title="futuro do conjuntivo (pt) / subjuntivo (br)">Future</span>
| style="border: 1px solid #999999; vertical-align: top;" | {fut_sub_1s}
| style="border: 1px solid #999999; vertical-align: top;" | {fut_sub_2s}
| style="border: 1px solid #999999; vertical-align: top;" | {fut_sub_3s}
| style="border: 1px solid #999999; vertical-align: top;" | {fut_sub_1p}
| style="border: 1px solid #999999; vertical-align: top;" | {fut_sub_2p}
| style="border: 1px solid #999999; vertical-align: top;" | {fut_sub_3p}
|-
! style="border: 1px solid #999999; background:#f4e4d0" colspan="7" | ''<span title="imperativo">Imperative</span>''
|-
! style="border: 1px solid #999999; background:#d4c4b0" | <span title="imperativo afirmativo">Affirmative</span>
| style="border: 1px solid #999999; vertical-align: top;" rowspan="2" |
| style="border: 1px solid #999999; vertical-align: top;" | {imp_2s}
| style="border: 1px solid #999999; vertical-align: top;" | {imp_3s}
| style="border: 1px solid #999999; vertical-align: top;" | {imp_1p}
| style="border: 1px solid #999999; vertical-align: top;" | {imp_2p}
| style="border: 1px solid #999999; vertical-align: top;" | {imp_3p}
|-
! style="border: 1px solid #999999; background:#d4c4b0" | <span title="imperativo negativo">Negative</span> (<<não>>)
| style="border: 1px solid #999999; vertical-align: top;" | {neg_imp_2s}
| style="border: 1px solid #999999; vertical-align: top;" | {neg_imp_3s}
| style="border: 1px solid #999999; vertical-align: top;" | {neg_imp_1p}
| style="border: 1px solid #999999; vertical-align: top;" | {neg_imp_2p}
| style="border: 1px solid #999999; vertical-align: top;" | {neg_imp_3p}
|{\cl}{notes_clause}</div></div>]=]
local double_pp_template = [=[
! style="border: 1px solid #999999; background:#ffc498" colspan="7" | ''<span title="particípio irregular">Short past participle</span>''
|-
! style="border: 1px solid #999999; background:#dfa478" | Masculine
| style="border: 1px solid #999999; vertical-align: top;" colspan="3" | {short_pp_ms}
| style="border: 1px solid #999999; vertical-align: top;" colspan="3" | {short_pp_mp}
|-
! style="border: 1px solid #999999; background:#dfa478" | Feminine
| style="border: 1px solid #999999; vertical-align: top;" colspan="3" | {short_pp_fs}
| style="border: 1px solid #999999; vertical-align: top;" colspan="3" | {short_pp_fp}
|-
! style="border: 1px solid #999999; background:#ffc498" colspan="7" | ''<span title="particípio regular">Long past participle</span>''
|-
! style="border: 1px solid #999999; background:#dfa478" | Masculine
| style="border: 1px solid #999999; vertical-align: top;" colspan="3" | {pp_ms}
| style="border: 1px solid #999999; vertical-align: top;" colspan="3" | {pp_mp}
|-
! style="border: 1px solid #999999; background:#dfa478" | Feminine
| style="border: 1px solid #999999; vertical-align: top;" colspan="3" | {pp_fs}
| style="border: 1px solid #999999; vertical-align: top;" colspan="3" | {pp_fp}
|-]=]
local single_pp_template = [=[
! style="border: 1px solid #999999; background:#ffc498" colspan="7" | ''<span title="particípio passado">Past participle</span>''
|-
! style="border: 1px solid #999999; background:#dfa478" | Masculine
| style="border: 1px solid #999999; vertical-align: top;" colspan="3" | {pp_ms}
| style="border: 1px solid #999999; vertical-align: top;" colspan="3" | {pp_mp}
|-
! style="border: 1px solid #999999; background:#dfa478" | Feminine
| style="border: 1px solid #999999; vertical-align: top;" colspan="3" | {pp_fs}
| style="border: 1px solid #999999; vertical-align: top;" colspan="3" | {pp_fp}
|-]=]
local function make_table(alternant_multiword_spec)
local forms = alternant_multiword_spec.forms
forms.title = link_term(alternant_multiword_spec.lemmas[1].form)
if alternant_multiword_spec.annotation ~= "" then
forms.title = forms.title .. " (" .. alternant_multiword_spec.annotation .. ")"
end
forms.description = ""
-- Format the table.
forms.footnote = alternant_multiword_spec.footnote_basic
forms.notes_clause = forms.footnote ~= "" and format(notes_template, forms) or ""
-- has_short_pp is computed in show_forms().
local pp_template = alternant_multiword_spec.has_short_pp and double_pp_template or single_pp_template
forms.pp_clause = format(pp_template, forms)
local table_with_pronouns = rsub(basic_table, "<<(.-)>>", link_term)
return format(table_with_pronouns, forms)
end
-- Externally callable function to parse and conjugate a verb given user-specified arguments.
-- Return value is WORD_SPEC, an object where the conjugated forms are in `WORD_SPEC.forms`
-- for each slot. If there are no values for a slot, the slot key will be missing. The value
-- for a given slot is a list of objects {form=FORM, footnotes=FOOTNOTES}.
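-- Illustrative sketch of that structure (not part of the original comment): for a
-- regular verb such as "amar", WORD_SPEC.forms.pres_1s would hold a list like
-- { { form = "[[amo]]" } }, while slots with no forms (e.g. in defective verbs)
-- are simply absent from WORD_SPEC.forms.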
function export.do_generate_forms(args, source_template, headword_head)
local PAGENAME = mw.title.getCurrentTitle().text
local function in_template_space()
return mw.title.getCurrentTitle().nsText == "සැකිල්ල"
end
-- Determine the verb spec we're being asked to generate the conjugation of. This may be taken from the
-- current page title or the value of |pagename=; but not when called from {{pt-verb form of}}, where the
-- page title is a non-lemma form. Note that the verb spec may omit the infinitive; e.g. it may be "<i-e>".
-- For this reason, we use the value of `pagename` computed here down below, when calling normalize_all_lemmas().
local pagename = source_template ~= "pt-verb form of" and args.pagename or PAGENAME
local head = headword_head or pagename
local arg1 = args[1]
if not arg1 then
if (pagename == "pt-conj" or pagename == "pt-verb") and in_template_space() then
arg1 = "cergir<i-e,i>"
elseif pagename == "pt-verb form of" and in_template_space() then
arg1 = "amar"
else
arg1 = "<>"
end
end
-- When called from {{pt-verb form of}}, determine the non-lemma form whose inflections we're being asked to
-- determine. This normally comes from the page title or the value of |pagename=.
local verb_form_of_form
if source_template == "pt-verb form of" then
verb_form_of_form = args.pagename
if not verb_form_of_form then
if PAGENAME == "pt-verb form of" and in_template_space() then
verb_form_of_form = "ame"
else
verb_form_of_form = PAGENAME
end
end
end
local incorporated_headword_head_into_lemma = false
if arg1:find("^<.*>$") then -- missing lemma
if head:find(" ") then
-- If multiword lemma, try to add arg spec after the first word.
-- Try to preserve the brackets in the part after the verb, but don't do it
-- if there aren't the same number of left and right brackets in the verb
-- (which means the verb was linked as part of a larger expression).
local refl_clitic_verb, post = rmatch(head, "^(.-)( .*)$")
local left_brackets = rsub(refl_clitic_verb, "[^%[]", "")
local right_brackets = rsub(refl_clitic_verb, "[^%]]", "")
if #left_brackets == #right_brackets then
arg1 = iut.remove_redundant_links(refl_clitic_verb) .. arg1 .. post
incorporated_headword_head_into_lemma = true
else
-- Try again using the form without links.
local linkless_head = m_links.remove_links(head)
if linkless_head:find(" ") then
refl_clitic_verb, post = rmatch(linkless_head, "^(.-)( .*)$")
arg1 = refl_clitic_verb .. arg1 .. post
else
error("Unable to incorporate <...> spec into explicit head due to a multiword linked verb or " ..
"unbalanced brackets; please include <> explicitly: " .. arg1)
end
end
else
-- Will be incorporated through `head` below in the call to normalize_all_lemmas().
incorporated_headword_head_into_lemma = true
end
end
local function split_bracketed_runs_into_words(bracketed_runs)
return iut.split_alternating_runs(bracketed_runs, " ", "preserve splitchar")
end
local parse_props = {
parse_indicator_spec = parse_indicator_spec,
-- Split words only on spaces, not on hyphens, because that messes up reflexive verb parsing.
split_bracketed_runs_into_words = split_bracketed_runs_into_words,
allow_default_indicator = true,
allow_blank_lemma = true,
}
local alternant_multiword_spec = iut.parse_inflected_text(arg1, parse_props)
alternant_multiword_spec.pos = pos or "verbs"
alternant_multiword_spec.args = args
alternant_multiword_spec.source_template = source_template
alternant_multiword_spec.verb_form_of_form = verb_form_of_form
alternant_multiword_spec.incorporated_headword_head_into_lemma = incorporated_headword_head_into_lemma
normalize_all_lemmas(alternant_multiword_spec, head)
detect_all_indicator_specs(alternant_multiword_spec)
local inflect_props = {
slot_list = alternant_multiword_spec.all_verb_slots,
inflect_word_spec = conjugate_verb,
get_variants = function(form) return rsub(form, not_var_code_c, "") end,
-- We add links around the generated verbal forms rather than allow the entire multiword
-- expression to be a link, so ensure that user-specified links get included as well.
include_user_specified_links = true,
}
iut.inflect_multiword_or_alternant_multiword_spec(alternant_multiword_spec, inflect_props)
-- Remove redundant brackets around entire forms.
for slot, forms in pairs(alternant_multiword_spec.forms) do
for _, form in ipairs(forms) do
form.form = iut.remove_redundant_links(form.form)
end
end
compute_categories_and_annotation(alternant_multiword_spec)
if args.json and source_template == "pt-conj" then
return export.remove_variant_codes(require("Module:JSON").toJSON(alternant_multiword_spec.forms))
end
return alternant_multiword_spec
end
-- Entry point for {{pt-conj}}. Template-callable function to parse and conjugate a verb given
-- user-specified arguments and generate a displayable table of the conjugated forms.
function export.show(frame)
local parent_args = frame:getParent().args
local params = {
[1] = {},
["noautolinktext"] = {type = "boolean"},
["noautolinkverb"] = {type = "boolean"},
["pagename"] = {}, -- for testing/documentation pages
["json"] = {type = "boolean"}, -- for bot use
}
local args = require("Module:parameters").process(parent_args, params)
local alternant_multiword_spec = export.do_generate_forms(args, "pt-conj")
if type(alternant_multiword_spec) == "string" then
-- JSON return value
return alternant_multiword_spec
end
show_forms(alternant_multiword_spec)
return make_table(alternant_multiword_spec) ..
require("Module:utilities").format_categories(alternant_multiword_spec.categories, lang, nil, nil, force_cat)
end
return export
1erag2kt0j0nhmzkfklu2l1nnsa65ut
සැකිල්ල:pt-verb
10
125498
193427
2024-04-27T13:49:36Z
en>SurjectionBot
0
Protected "[[Template:pt-verb]]": (bot) automatically protect highly visible templates/modules (reference score: 2000+ >= 1000) ([Edit=Allow only autoconfirmed users] (indefinite) [Move=Allow only autoconfirmed users] (indefinite))
193427
wikitext
text/x-wiki
{{#invoke:pt-headword|show|verbs}}<!--
--><noinclude>{{documentation}}[[Category:Portuguese headword-line templates|verb]]</noinclude>
ec8cwmd3khtv5fxml8zww3ua9i4c0ys
193428
193427
2024-11-21T10:23:31Z
Lee
19
One revision from [[:en:Template:pt-verb]]

193427
wikitext
text/x-wiki
{{#invoke:pt-headword|show|verbs}}<!--
--><noinclude>{{documentation}}[[Category:Portuguese headword-line templates|verb]]</noinclude>
ec8cwmd3khtv5fxml8zww3ua9i4c0ys
සැකිල්ල:pt-verb/documentation
10
125499
193429
2022-11-28T07:40:34Z
en>Benwing2
0
193429
wikitext
text/x-wiki
{{documentation subpage}}
This template generates an inflection line and categorizes [[:Category:Portuguese verbs|Portuguese verb]] entries.
==Usage==
This template should be added to all Portuguese verb entries.
The template should be placed within the Portuguese language section, immediately following the '''Verb''' L3 header.
As with other Wiktionary inflection line templates, '''please do not use <code>subst:</code>'''.
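A minimal placement sketch (entry layout abbreviated; the headword and gloss line are illustrative only):
<pre>
==Portuguese==

===Verb===
{{pt-verb}}

# to [[sing]]
</pre>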
===Parameters===
The template uses the same parameter {{para|1}} as {{tl|pt-conj}}.
==Examples==
'''( 1 )''' {{m|pt|cantar||to sing}}
:<code><nowiki>{{pt-verb}}</nowiki></code>
{{pt-verb|pagename=cantar}}
----
'''( 2 )''' {{m|pt|ser||to be}}
:<code><nowiki>{{pt-verb}}</nowiki></code>
{{pt-verb|pagename=ser}}
----
'''( 3 )''' {{m|pt|conseguir||to get, to manage}}
:<code><nowiki>{{pt-verb|<i-e>}}</nowiki></code>
{{pt-verb|<i-e>|pagename=conseguir}}
----
'''( 4 )''' {{m|pt|demolir||to demolish, to destroy}}
:<code><nowiki>{{pt-verb|<no_pres1_and_sub>}}</nowiki></code>
{{pt-verb|<no_pres1_and_sub>|pagename=demolir}}
----
'''( 5 )''' {{m|pt|chover a cântaros||to [[rain cats and dogs]]}}
:<code><nowiki>{{pt-verb|chover<only3s> [[a]] [[cântaro]]s}}</nowiki></code>
{{pt-verb|chover<only3s> [[a]] [[cântaro]]s}}
----
'''( 6 )''' {{m|pt|beijar de língua||to [[French kiss]]}}
:<code><nowiki>{{pt-verb}}</nowiki></code>
{{pt-verb|pagename=beijar de língua}}
<includeonly>[[Category:Portuguese headword-line templates|verb]]</includeonly>
deldx4so60df87h5jzurfn8ouziumri
193430
193429
2024-11-21T10:23:47Z
Lee
19
One revision from [[:en:Template:pt-verb/documentation]]
193429
wikitext
text/x-wiki
{{documentation subpage}}
This template generates an inflection line and categorizes [[:Category:Portuguese verbs|Portuguese verb]] entries.
==Usage==
This template should be added to all Portuguese verb entries.
The template should be placed within the Portuguese language section, immediately following the '''Verb''' L3 header.
As with other Wiktionary inflection line templates, '''please do not use <code>subst:</code>'''.
===Parameters===
The template uses the same parameter {{para|1}} as {{tl|pt-conj}}.
==Examples==
'''( 1 )''' {{m|pt|cantar||to sing}}
:<code><nowiki>{{pt-verb}}</nowiki></code>
{{pt-verb|pagename=cantar}}
----
'''( 2 )''' {{m|pt|ser||to be}}
:<code><nowiki>{{pt-verb}}</nowiki></code>
{{pt-verb|pagename=ser}}
----
'''( 3 )''' {{m|pt|conseguir||to get, to manage}}
:<code><nowiki>{{pt-verb|<i-e>}}</nowiki></code>
{{pt-verb|<i-e>|pagename=conseguir}}
----
'''( 4 )''' {{m|pt|demolir||to demolish, to destroy}}
:<code><nowiki>{{pt-verb|<no_pres1_and_sub>}}</nowiki></code>
{{pt-verb|<no_pres1_and_sub>|pagename=demolir}}
----
'''( 5 )''' {{m|pt|chover a cântaros||to [[rain cats and dogs]]}}
:<code><nowiki>{{pt-verb|chover<only3s> [[a]] [[cântaro]]s}}</nowiki></code>
{{pt-verb|chover<only3s> [[a]] [[cântaro]]s}}
----
'''( 6 )''' {{m|pt|beijar de língua||to [[French kiss]]}}
:<code><nowiki>{{pt-verb}}</nowiki></code>
{{pt-verb|pagename=beijar de língua}}
<includeonly>[[Category:Portuguese headword-line templates|verb]]</includeonly>
deldx4so60df87h5jzurfn8ouziumri
සැකිල්ල:pt-conj
10
125500
193431
2024-04-27T13:49:24Z
en>SurjectionBot
0
Protected "[[Template:pt-conj]]": (bot) automatically protect highly visible templates/modules (reference score: 1998+ >= 1000) ([Edit=Allow only autoconfirmed users] (indefinite) [Move=Allow only autoconfirmed users] (indefinite))
193431
wikitext
text/x-wiki
{{#invoke:pt-verb|show}}<!--
--><noinclude>{{documentation}}</noinclude>
hm3030e0u4c9fooo1pjfdt1iqv4toij
193432
193431
2024-11-21T10:23:59Z
Lee
19
One revision from [[:en:Template:pt-conj]]
193431
wikitext
text/x-wiki
{{#invoke:pt-verb|show}}<!--
--><noinclude>{{documentation}}</noinclude>
hm3030e0u4c9fooo1pjfdt1iqv4toij
සැකිල්ල:pt-conj/documentation
10
125501
193433
2022-12-03T19:35:05Z
en>Benwing2
0
only3s
193433
wikitext
text/x-wiki
{{documentation subpage}}
This template generates a navigation box for [[:Category:Portuguese verbs|Portuguese verb]] conjugation entries. The actual work is done by [[Module:pt-verb]].
==Usage==
This template should be added to all Portuguese verb entries.
The template should be placed within the Portuguese language section, immediately following the '''Conjugation''' L4 header.
As with other Wiktionary navigation box templates, '''please do not use <code>subst:</code>'''.
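A minimal placement sketch (entry layout abbreviated; the example verb and gloss are illustrative only):
<pre>
==Portuguese==

===Verb===
{{pt-verb}}

# to [[sing]]

====Conjugation====
{{pt-conj}}
</pre>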
===Parameters===
The template uses one unnamed parameter to specify any information not automatically inferrable from the infinitive form.
==Examples==
'''( 1 )''' {{m|pt|cantar||to sing}}
:<code><nowiki>{{pt-conj}}</nowiki></code>
{{pt-conj|pagename=cantar}}
'''NOTE:''' For most verbs, no parameters are needed.
----
'''( 2 )''' {{m|pt|ser||to be}}
:<code><nowiki>{{pt-conj}}</nowiki></code>
{{pt-conj|pagename=ser}}
'''NOTE:''' The module knows how to handle all irregular verbs automatically, including prefixed derivatives such as {{m|pt|desdar}} (from {{m|pt|dar}}) and {{m|pt|abster}} (from {{m|pt|ter}}).
----
'''( 3 )''' {{m|pt|desfazer||to undo}}
:<code><nowiki>{{pt-conj}}</nowiki></code>
{{pt-conj|pagename=desfazer}}
'''NOTE:''' Example of a prefixed verb handled automatically.
----
'''( 4 )''' {{m|pt|semear||to sow}}
:<code><nowiki>{{pt-conj}}</nowiki></code>
{{pt-conj|pagename=semear}}
'''NOTE:''' The alternation between ''-ear'' and stressed ''-eio'' is predictable, hence automatic.
----
'''( 5 )''' {{m|pt|conseguir||to get, to manage}}
:<code><nowiki>{{pt-conj|<i-e>}}</nowiki></code>
{{pt-conj|<i-e>|pagename=conseguir}}
'''NOTE:''' The vowel alternation between ''conseguir'', ''consigo'' and ''consegue'' is unpredictable, hence a vowel alternation indicator <code><i-e></code> must be given. In general, indicators are contained between angle brackets, and if more than one must be given, they are separated by periods/full stops (a <code>.</code> symbol). Note however that the consonant alternation between ''-g-'' and ''-gu-'' is predictable, hence automatic.
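For instance, a verb that needed both the <code><i-e></code> indicator and the <code><only3s></code> indicator (a hypothetical combination, shown only to illustrate the syntax) would be written <code><nowiki><i-e.only3s></nowiki></code>.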
----
'''( 6 )''' {{m|pt|renhir||to fight, to argue}}
:<code><nowiki>{{pt-conj|<no_pres_stressed>}}</nowiki></code>
{{pt-conj|<no_pres_stressed>|pagename=renhir}}
'''NOTE:''' This verb is defective, missing all stem-stressed forms in the present indicative and imperative (as well as the entire present subjunctive). This is specified using the indicator <code><no_pres_stressed></code>.
----
'''( 7 )''' {{m|pt|demolir||to demolish, to knock down}}
:<code><nowiki>{{pt-conj|<no_pres1_and_sub>}}</nowiki></code>
{{pt-conj|<no_pres1_and_sub>|pagename=demolir}}
'''NOTE:''' This verb is defective in a different way than {{m|pt|renhir}}, missing the first-singular present indicative and the entire present subjunctive. This is specified using the indicator <code><no_pres1_and_sub></code>.
----
'''( 8 )''' {{m|pt|chover a cântaros||to [[rain cats and dogs]]}}
:<code><nowiki>{{pt-conj|chover<only3s> [[a]] [[cântaro]]s}}</nowiki></code>
{{pt-conj|chover<only3s> [[a]] [[cântaro]]s}}
'''NOTE:''' Full support is available for multiword expressions. Put the angle-bracket spec after the verb or verbs needing to be conjugated. Remaining text is passed through unaltered, and can include links, as shown. Here, the indicator <code>only3s</code> specifies an impersonal ("only third-singular") verb.
----
'''( 9 )''' {{m|pt|dar nome aos bois||to [[name names]]|lit=to name the cows}}
:<code><nowiki>{{pt-conj|dar<> [[nome]] [[aos]] [[boi]]s}}</nowiki></code>
{{pt-conj|dar<> [[nome]] [[aos]] [[boi]]s}}
'''NOTE:''' In a multiword expression where no indicators are needed, place empty angle brackets after the verb or verbs needing to be conjugated.
----
'''( 10 )''' {{m|pt|beijar de língua||to [[French kiss]]}}
:<code><nowiki>{{pt-conj}}</nowiki></code>
{{pt-conj|pagename=beijar de língua}}
'''NOTE:''' In a multiword expression where no indicators are needed and all words can be linked directly, {{para|1}} can be omitted. This is equivalent to placing <code><></code> after the first word, and all remaining words are automatically linked.
----
'''( 11 )''' {{m|pt|chuviscar||to [[drizzle]]}}
:<code><nowiki>{{pt-conj|<only3s>}}</nowiki></code>
{{pt-conj|<only3s>|pagename=chuviscar}}
'''NOTE:''' This verb is impersonal and has only third-person singular forms. For verbs restricted to the third person (both singular and plural), use <code><only3sp></code>; for verbs restricted to the third-person plural, use <code><only3p></code>.
<includeonly>
[[Category:Portuguese verb inflection-table templates|*]]
[[fr:Modèle:pt-conj]]
[[pt:Predefinição:conj.pt.ar]]
</includeonly>
fmydetamv3x7ki009hmclt4j68kwi48
193434
193433
2024-11-21T10:24:17Z
Lee
19
One revision from [[:en:Template:pt-conj/documentation]]
193433
wikitext
text/x-wiki
{{documentation subpage}}
This template generates a navigation box for [[:Category:Portuguese verbs|Portuguese verb]] conjugation entries. The actual work is done by [[Module:pt-verb]].
==Usage==
This template should be added to all Portuguese verb entries.
The template should be placed within the Portuguese language section, immediately following the '''Conjugation''' L4 header.
As with other Wiktionary navigation box templates, '''please do not use <code>subst:</code>'''.
===Parameters===
The template uses one unnamed parameter to specify any information not automatically inferrable from the infinitive form.
==Examples==
'''( 1 )''' {{m|pt|cantar||to sing}}
:<code><nowiki>{{pt-conj}}</nowiki></code>
{{pt-conj|pagename=cantar}}
'''NOTE:''' For most verbs, no parameters are needed.
----
'''( 2 )''' {{m|pt|ser||to be}}
:<code><nowiki>{{pt-conj}}</nowiki></code>
{{pt-conj|pagename=ser}}
'''NOTE:''' The module knows how to handle all irregular verbs automatically, including prefixed derivatives such as {{m|pt|desdar}} (from {{m|pt|dar}}) and {{m|pt|abster}} (from {{m|pt|ter}}).
----
'''( 3 )''' {{m|pt|desfazer||to undo}}
:<code><nowiki>{{pt-conj}}</nowiki></code>
{{pt-conj|pagename=desfazer}}
'''NOTE:''' Example of a prefixed verb handled automatically.
----
'''( 4 )''' {{m|pt|semear||to sow}}
:<code><nowiki>{{pt-conj}}</nowiki></code>
{{pt-conj|pagename=semear}}
'''NOTE:''' The alternation between ''-ear'' and stressed ''-eio'' is predictable, hence automatic.
----
'''( 5 )''' {{m|pt|conseguir||to get, to manage}}
:<code><nowiki>{{pt-conj|<i-e>}}</nowiki></code>
{{pt-conj|<i-e>|pagename=conseguir}}
'''NOTE:''' The vowel alternation between ''conseguir'', ''consigo'' and ''consegue'' is unpredictable, hence a vowel alternation indicator <code><i-e></code> must be given. In general, indicators are contained between angle brackets, and if more than one must be given, they are separated by periods/full stops (a <code>.</code> symbol). Note however that the consonant alternation between ''-g-'' and ''-gu-'' is predictable, hence automatic.
----
'''( 6 )''' {{m|pt|renhir||to fight, to argue}}
:<code><nowiki>{{pt-conj|<no_pres_stressed>}}</nowiki></code>
{{pt-conj|<no_pres_stressed>|pagename=renhir}}
'''NOTE:''' This verb is defective, missing all stem-stressed forms in the present indicative and imperative (as well as the entire present subjunctive). This is specified using the indicator <code><no_pres_stressed></code>.
----
'''( 7 )''' {{m|pt|demolir||to demolish, to knock down}}
:<code><nowiki>{{pt-conj|<no_pres1_and_sub>}}</nowiki></code>
{{pt-conj|<no_pres1_and_sub>|pagename=demolir}}
'''NOTE:''' This verb is defective in a different way than {{m|pt|renhir}}, missing the first-singular present indicative and the entire present subjunctive. This is specified using the indicator <code><no_pres1_and_sub></code>.
----
'''( 8 )''' {{m|pt|chover a cântaros||to [[rain cats and dogs]]}}
:<code><nowiki>{{pt-conj|chover<only3s> [[a]] [[cântaro]]s}}</nowiki></code>
{{pt-conj|chover<only3s> [[a]] [[cântaro]]s}}
'''NOTE:''' Full support is available for multiword expressions. Put the angle-bracket spec after the verb or verbs needing to be conjugated. Remaining text is passed through unaltered, and can include links, as shown. Here, the indicator <code>only3s</code> specifies an impersonal ("only third-singular") verb.
----
'''( 9 )''' {{m|pt|dar nome aos bois||to [[name names]]|lit=to name the cows}}
:<code><nowiki>{{pt-conj|dar<> [[nome]] [[aos]] [[boi]]s}}</nowiki></code>
{{pt-conj|dar<> [[nome]] [[aos]] [[boi]]s}}
'''NOTE:''' In a multiword expression where no indicators are needed, place empty angle brackets after the verb or verbs needing to be conjugated.
----
'''( 10 )''' {{m|pt|beijar de língua||to [[French kiss]]}}
:<code><nowiki>{{pt-conj}}</nowiki></code>
{{pt-conj|pagename=beijar de língua}}
'''NOTE:''' In a multiword expression where no indicators are needed and all words can be linked directly, {{para|1}} can be omitted. This is equivalent to placing <code><></code> after the first word, and all remaining words are automatically linked.
----
'''( 11 )''' {{m|pt|chuviscar||to [[drizzle]]}}
:<code><nowiki>{{pt-conj|<only3s>}}</nowiki></code>
{{pt-conj|<only3s>|pagename=chuviscar}}
'''NOTE:''' This verb is impersonal and has only third-person singular forms. For verbs restricted to the third person (both singular and plural), use <code><only3sp></code>; for verbs restricted to the third-person plural, use <code><only3p></code>.
<includeonly>
[[Category:Portuguese verb inflection-table templates|*]]
[[fr:Modèle:pt-conj]]
[[pt:Predefinição:conj.pt.ar]]
</includeonly>
fmydetamv3x7ki009hmclt4j68kwi48
සැකිල්ල:pt-conj/sandbox
10
125502
193435
2021-06-17T14:56:37Z
en>Capmo
0
Created page with "<includeonly>{{#invoke:pt-conj/sandbox|show}}</includeonly>"
193435
wikitext
text/x-wiki
<includeonly>{{#invoke:pt-conj/sandbox|show}}</includeonly>
0mvybisb11ifzeblt2vr9kvs58fyzz8
193436
193435
2024-11-21T10:24:22Z
Lee
19
One revision from [[:en:Template:pt-conj/sandbox]]
193435
wikitext
text/x-wiki
<includeonly>{{#invoke:pt-conj/sandbox|show}}</includeonly>
0mvybisb11ifzeblt2vr9kvs58fyzz8
Module:typing-aids/data/grc
828
125503
193440
2024-04-05T04:14:37Z
en>Theknightwho
0
Use faster implementation of mw.ustring.char.
193440
Scribunto
text/plain
local U = require("Module:string/char")
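-- Overview of the data layout (descriptive comment): `data` is a list of replacement
-- tables. The first maps ASCII shortcuts to Greek letters and combining diacritics,
-- the second rewrites σ as final ς before spaces, punctuation or end of string, and
-- the third handles escapes (ς* and ς- block conversion to final sigma) plus
-- punctuation such as the Greek question mark and the interpunct.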
local data = {
{
["a"] = "α",
["b"] = "β",
["c"] = "ξ",
["d"] = "δ",
["e"] = "ε",
["f"] = "φ",
["g"] = "γ",
["h"] = "η",
["([^_])i"] = "%1ι",
["^i"] = "ι",
["k"] = "κ",
["l"] = "λ",
["m"] = "μ",
["n"] = "ν",
["o"] = "ο",
["p"] = "π",
["q"] = "θ",
["r"] = "ρ",
["s"] = "σ",
["t"] = "τ",
["u"] = "υ",
["v"] = "ϝ",
["w"] = "ω",
["x"] = "χ",
["y"] = "ψ",
["z"] = "ζ",
["A"] = "Α",
["B"] = "Β",
["C"] = "Ξ",
["D"] = "Δ",
["E"] = "Ε",
["F"] = "Φ",
["G"] = "Γ",
["H"] = "Η",
["I"] = "Ι",
["K"] = "Κ",
["L"] = "Λ",
["M"] = "Μ",
["N"] = "Ν",
["O"] = "Ο",
["P"] = "Π",
["Q"] = "Θ",
["R"] = "Ρ",
["S"] = "Σ",
["T"] = "Τ",
["U"] = "Υ",
["V"] = "Ϝ",
["W"] = "Ω",
["X"] = "Χ",
["Y"] = "Ψ",
["Z"] = "Ζ",
["_i"] = U(0x345), -- iota subscript (ypogegrammeni)
["_"] = U(0x304), -- macron
[U(0xAF)] = U(0x304), -- non-combining macron
[U(0x2C9)] = U(0x304), -- modifier letter macron
["%^"] = U(0x306), -- breve
["˘"] = U(0x306), -- non-combining breve
["%+"] = U(0x308), -- diaeresis
["%("] = U(0x314), -- rough breathing (reversed comma)
["%)"] = U(0x313), -- smooth breathing (comma)
["/"] = U(0x301), -- acute
["\\"] = U(0x300), -- grave
["="] = U(0x342), -- Greek circumflex (perispomeni)
["~"] = U(0x342),
["{{=}}"] = U(0x342),
["'"] = "’", -- right single quotation mark (curly apostrophe)
["ϑ"] = "θ",
["ϰ"] = "κ",
["ϱ"] = "ρ",
["ϕ"] = "φ",
},
{
["σ%f[%s%p%z]"] = "ς",
},
{
["ς%*"] = "σ", -- used to block conversion to final sigma
["ς%-"] = "σ-", -- used to block conversion to final sigma
["!"] = "|",
["%?"] = U(0x37E), -- Greek question mark
[";"] = "·", -- interpunct
["^" .. U(0x314)] = "῾", -- spacing rough breathing
["^" .. U(0x313)] = "᾿", -- spacing smooth breathing
},
}
return data
8glul655101tfej11zzxawsyd8z6ze6
193441
193440
2024-11-21T10:29:26Z
Lee
19
One revision from [[:en:Module:typing-aids/data/grc]]
193440
Scribunto
text/plain
local U = require("Module:string/char")
local data = {
{
["a"] = "α",
["b"] = "β",
["c"] = "ξ",
["d"] = "δ",
["e"] = "ε",
["f"] = "φ",
["g"] = "γ",
["h"] = "η",
["([^_])i"] = "%1ι",
["^i"] = "ι",
["k"] = "κ",
["l"] = "λ",
["m"] = "μ",
["n"] = "ν",
["o"] = "ο",
["p"] = "π",
["q"] = "θ",
["r"] = "ρ",
["s"] = "σ",
["t"] = "τ",
["u"] = "υ",
["v"] = "ϝ",
["w"] = "ω",
["x"] = "χ",
["y"] = "ψ",
["z"] = "ζ",
["A"] = "Α",
["B"] = "Β",
["C"] = "Ξ",
["D"] = "Δ",
["E"] = "Ε",
["F"] = "Φ",
["G"] = "Γ",
["H"] = "Η",
["I"] = "Ι",
["K"] = "Κ",
["L"] = "Λ",
["M"] = "Μ",
["N"] = "Ν",
["O"] = "Ο",
["P"] = "Π",
["Q"] = "Θ",
["R"] = "Ρ",
["S"] = "Σ",
["T"] = "Τ",
["U"] = "Υ",
["V"] = "Ϝ",
["W"] = "Ω",
["X"] = "Χ",
["Y"] = "Ψ",
["Z"] = "Ζ",
["_i"] = U(0x345), -- iota subscript (ypogegrammeni)
["_"] = U(0x304), -- macron
[U(0xAF)] = U(0x304), -- non-combining macron
[U(0x2C9)] = U(0x304), -- modifier letter macron
["%^"] = U(0x306), -- breve
["˘"] = U(0x306), -- non-combining breve
["%+"] = U(0x308), -- diaeresis
["%("] = U(0x314), -- rough breathing (reversed comma)
["%)"] = U(0x313), -- smooth breathing (comma)
["/"] = U(0x301), -- acute
["\\"] = U(0x300), -- grave
["="] = U(0x342), -- Greek circumflex (perispomeni)
["~"] = U(0x342),
["{{=}}"] = U(0x342),
["'"] = "’", -- right single quotation mark (curly apostrophe)
["ϑ"] = "θ",
["ϰ"] = "κ",
["ϱ"] = "ρ",
["ϕ"] = "φ",
},
{
["σ%f[%s%p%z]"] = "ς",
},
{
["ς%*"] = "σ", -- used to block conversion to final sigma
["ς%-"] = "σ-", -- used to block conversion to final sigma
["!"] = "|",
["%?"] = U(0x37E), -- Greek question mark
[";"] = "·", -- interpunct
["^" .. U(0x314)] = "῾", -- spacing rough breathing
["^" .. U(0x313)] = "᾿", -- spacing smooth breathing
},
}
return data
8glul655101tfej11zzxawsyd8z6ze6
Module:typing-aids/data helpers
828
125504
193454
2021-03-17T22:53:31Z
en>Erutuon
0
function from [[Module:typing-aids/data/Armi]]
193454
Scribunto
text/plain
local export = {}
-- Split a table of replacements into a two-element list: [1] multi-character
-- patterns and [2] single-character patterns. Keys are decomposed to NFD so
-- that precomposed and combining-character spellings compare consistently.
function export.split_single_and_multi_char(repls)
	local processed = {}
	local single_char = {}
	local multi_char = {}
	processed[1] = multi_char
	processed[2] = single_char
	local decompose, ulen = mw.ustring.toNFD, mw.ustring.len
	for pat, repl in pairs(repls) do
		pat = decompose(pat)
		if ulen(pat) == 1 then
			single_char[pat] = repl
		else
			multi_char[pat] = repl
		end
	end
	return processed
end
return export
nfpcg3oaik4unv4ywf0dcfwrmukpou0
193455
193454
2024-11-21T10:34:13Z
Lee
19
One revision from [[:en:Module:typing-aids/data_helpers]]
193454
Scribunto
text/plain
local export = {}
function export.split_single_and_multi_char(repls)
local processed = {}
local single_char = {}
local multi_char = {}
processed[1] = multi_char
processed[2] = single_char
local decompose, ulen = mw.ustring.toNFD, mw.ustring.len
for pat, repl in pairs(repls) do
pat = decompose(pat)
if ulen(pat) == 1 then
single_char[pat] = repl
else
multi_char[pat] = repl
end
end
return processed
end
return export
nfpcg3oaik4unv4ywf0dcfwrmukpou0
Module:typing-aids/documentation
828
125505
193456
2024-03-08T23:23:07Z
en>Benwing2
0
use {{module cat}}
193456
wikitext
text/x-wiki
This module is invoked by {{temp|chars}} (and {{temp|chars/example}}). It replaces ASCII keyboard shortcuts with characters used in various languages.
To edit the list of shortcuts, see [[Module:typing-aids/data]].
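For example, assuming the Ancient Greek shortcut table in [[Module:typing-aids/data/grc]], an invocation along the lines of <code><nowiki>{{chars|grc|a)lhqh/s}}</nowiki></code> is expected to produce ἀληθής (compare the testcases below).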
==Testcases==
{{#invoke:typing-aids/testcases|run_tests}}
{{module cat|grc,ar,ae,hit,gem,grk,ine|Character insertion,Template interface}}
9j4qrtsz3ryt0c8976zwyiqp4o8gtl9
193457
193456
2024-11-21T10:34:55Z
Lee
19
One revision from [[:en:Module:typing-aids/documentation]]
193456
wikitext
text/x-wiki
This module is invoked by {{temp|chars}} (and {{temp|chars/example}}). It replaces ASCII keyboard shortcuts with characters used in various languages.
To edit the list of shortcuts, see [[Module:typing-aids/data]].
==Testcases==
{{#invoke:typing-aids/testcases|run_tests}}
{{module cat|grc,ar,ae,hit,gem,grk,ine|Character insertion,Template interface}}
9j4qrtsz3ryt0c8976zwyiqp4o8gtl9
Module:typing-aids/testcases
828
125506
193458
2024-08-15T16:07:05Z
en>Kutchkutch
0
193458
Scribunto
text/plain
local tests = require('Module:UnitTests')
local m_typing = require('Module:typing-aids')
local get_by_code = require('Module:languages').getByCode
local decompose = mw.ustring.toNFD
local langs = {}
local tag_funcs = {}
-- Assumes one script per language.
local function tag_gen(test_text, langCode)
local func = tag_funcs[langCode]
if func then
return func
else
if not langs[langCode] then
langs[langCode] = get_by_code(langCode) or error('The language code ' .. langCode .. ' is invalid.')
end
local scCode = langs[langCode]:findBestScript(test_text):getCode() or
error('No script could be found for the text ' .. test_text .. ' and the language code ' .. langCode .. '.')
local before, after = '<span class="' .. scCode .. '" lang="' .. langCode .. '">', '</span>'
function func(text)
return before .. text .. after
end
tag_funcs[langCode] = func
return func
end
end
local options_cache = {}
function tests:check_output(code, expected, lang, transliteration, sc)
local result
if lang then
result = m_typing.replace{ lang, code, sc = sc }
else
result = m_typing.replace{code, sc = sc}
end
result = decompose(result)
expected = decompose(expected)
local options = options_cache[lang]
if not options and lang and not transliteration then
options = { display = tag_gen(result, lang) }
options_cache[lang] = options
end
self:equals(
code,
result,
expected,
options
)
end
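-- Descriptive note: each example is either a { shortcut, expected } pair or, for the
-- three-column test sets, a { shortcut, transliteration, native script } triple. For
-- plain language codes the native-script column is the expected output (and the
-- transliteration column is also fed through as an alternative shortcut); for
-- pseudo-codes ending in "-tr" the transliteration column is the expected output.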
function tests:do_tests(examples, lang, sc)
local transliteration = lang ~= nil and lang:find("%-tr$") ~= nil
for _, example in ipairs(examples) do
if #example == 3 and not transliteration then
self:check_output(example[1], example[3], lang, nil, sc)
if example[2] ~= example[1] then
self:check_output(example[2], example[3], lang, nil, sc)
end
else
self:check_output(example[1], example[2], lang, transliteration, sc)
end
end
end
function tests:test_all()
local examples = {
{ "*dye_'ws", "*dyḗws" },
{ "*n0mr0to's", "*n̥mr̥tós" },
{ "*tk'e'yti", "*tḱéyti" },
{ "*h1es-", "*h₁es-" },
{ "*t_ep-e'h1(ye)-ti", "*tₔp-éh₁(ye)-ti" },
{ "*h1e'k'wos", "*h₁éḱwos" },
{ "*bhebho'ydhe", "*bʰebʰóydʰe" },
{ "*dh3to's", "*dh₃tós" },
{ "*t'a_ko^`", "*þākǫ̂" },
{ "*T'eudo_balt'az", "*Þeudōbalþaz" },
{ "*bo_kijo_`", "*bōkijǭ" },
{ "*tat^t^o_", "*taťťō" },
{ "*d^o_'yyon", "*ďṓyyon" },
}
self:do_tests(examples)
end
local ae_examples = {
{ "ap", "ap", "𐬀𐬞" },
{ "xs.^uuas^", "xṣ̌uuaš", "𐬑𐬴𐬎𐬎𐬀𐬱" },
{ "v@hrka_na", "vəhrkāna", "𐬬𐬆𐬵𐬭𐬐𐬁𐬥𐬀" },
{ "nae_za", "naēza", "𐬥𐬀𐬉𐬰𐬀" },
{ "zaaO", "zā̊", "𐬰𐬃"},
{ "hizwaO", "hizuuå", "𐬵𐬌𐬰𐬎𐬎𐬂"},
}
function tests:test_Avestan()
self:do_tests(ae_examples, "ae")
end
function tests:test_Avestan_tr()
self:do_tests(ae_examples, "ae-tr")
end
function tests:test_Akkadian()
local examples = {
{ "ša", "𒊭" },
-- { "transliteration", "result" },
}
self:do_tests(examples, "akk")
end
local hy_examples = {
{ "azgaynac`um", "azgaynacʿum", "ազգայնացում" },
{ "terew", "terew", "տերև" },
{ "burz^uazia", "buržuazia", "բուրժուազիա" },
{ "kol_mnaki", "kołmnaki", "կողմնակի" },
}
function tests:test_Armenian()
self:do_tests(hy_examples, "hy")
end
function tests:test_Armenian_tr()
self:do_tests(hy_examples, "hy-tr")
end
function tests:test_Arabic()
local examples = {
{ "al-Huruuf al-qamariyyat'", "الْحُرُوف الْقَمَرِيَّة" },
{ "al-Huruuf al-xamsiyyat'", "الْحُرُوف الشَّمْسِيَّة" },
{ "ealifu WlwaSli", "أَلِفُ ٱلْوَصْلِ" },
{ "maae", "مَاء" },
{ "muemin", "مُؤْمِن" },
{ "eiDaafat'", "إِضَافَة" },
{ "eaab", "آب" },
{ "qureaan", "قُرْآن" },
{ "qiTTat'", "قِطَّة" },
{ "faEEaal", "فَعَّال" },
{ "xayeu", "شَيْءُ" },
{ "xayeaN", "شَيْءً" },
{ "daaeimaN", "دَائِمًا" },
{ "mabduueat'", "مَبْدُوءَة" },
{ "mabduu'at'", "مَبْدُوءَة" },
{ "badaaeiyyuN", "بَدَائِيٌّ" },
{ "badaaeat'", "بَدَاءَة" },
{ "maktuub", "مَكْتُوب" },
{ "taHriir", "تَحْرِير" },
{ "EuZmaaa", "عُظْمَى" },
{ "ean0", "أَنْ" },
{ "law0", "لَوْ" },
{ "xay'aN", "شَيْءً" },
{ "ta7riir", "تَحْرِير" },
{ "3axarat'", "عَشَرَة" },
}
self:do_tests(examples, "ar")
end
function tests:test_Persian()
local examples = {
{ "brAdr", "برادر" },
}
self:do_tests(examples, "fa")
end
function tests:test_PIE()
local examples = {
{ "*dye_'ws", "*dyḗws" },
{ "*n0mr0to's", "*n̥mr̥tós" },
{ "*tk'e'yti", "*tḱéyti" },
{ "*h1es-", "*h₁es-" },
{ "*t_ep-e'h1(ye)-ti", "*tₔp-éh₁(ye)-ti" },
{ "*h1e'k'wos", "*h₁éḱwos" },
{ "*bhebho'ydhe", "*bʰebʰóydʰe" },
{ "*dh3to's", "*dh₃tós" },
{ "*dhewg'h-", "*dʰewǵʰ-" },
}
self:do_tests(examples, "ine-pro")
end
function tests:test_Germanic()
local examples = {
{ "*t'a_ko^`", "*þākǫ̂" },
{ "*T'eudo_balt'az", "*Þeudōbalþaz" },
{ "*bo_kijo_`", "*bōkijǭ" },
}
self:do_tests(examples, "gem-pro")
end
function tests:test_Gothic()
local examples = {
{ "ƕaiwa", "𐍈𐌰𐌹𐍅𐌰" },
{ "anþar", "𐌰𐌽𐌸𐌰𐍂" },
{ "fidwōr", "𐍆𐌹𐌳𐍅𐍉𐍂" },
{ "fidwor", "𐍆𐌹𐌳𐍅𐍉𐍂" },
{ "mikils", "𐌼𐌹𐌺𐌹𐌻𐍃" },
{ "hēr", "𐌷𐌴𐍂" },
{ "her", "𐌷𐌴𐍂" },
{ "vac", "𐍈𐌰𐌸" },
-- { "", "" },
}
self:do_tests(examples, "got")
end
function tests:test_Hellenic()
local examples = {
{ "*tat^t^o_", "*taťťō" },
{ "*d^o_'yyon", "*ďṓyyon" },
{ "*gw@n'n'o_", "*gʷəňňō" },
{ "*gw@n^n^o_", "*gʷəňňō" },
{ "*kwhe_r", "*kʷʰēr" },
{ "*khwe_r", "*kʷʰēr" },
}
self:do_tests(examples, "grk-pro")
end
function tests:test_Greek()
local examples = {
{ "a__i", "ᾱͅ" },
{ "a)lhqh/s", "ἀληθής" },
{ "a)lhqhs*", "ἀληθησ" },
{ "a)lhqhs-", "ἀληθησ-" },
{ "a^)nh/r", "ᾰ̓νήρ" },
{ "Phlhi+a/dhs", "Πηληϊάδης" },
{ "Phlhi^+a^/dhs", "Πηληῐ̈ᾰ́δης" },
{ "Πηληϊ^ά^δης", "Πηληῐ̈ᾰ́δης" },
{ "e)a_/n", "ἐᾱ́ν" },
{ "ἐά_ν", "ἐᾱ́ν" },
{ "pa=sa^", "πᾶσᾰ" },
{ "u_(mei=s", "ῡ̔μεῖς" },
{ "a/)^ner", "ᾰ̓́νερ" },
{ "a/^)ner", "ᾰ̓́νερ" },
{ "a)/^ner", "ᾰ̓́νερ" },
{ "a)^/ner", "ᾰ̓́νερ" },
{ "dai+/frwn", "δαΐφρων" },
{ "dai/+frwn", "δαΐφρων" },
}
self:do_tests(examples, "grc")
end
function tests:test_Hittite()
local examples = {
{ "a-ku", "𒀀𒆪" },
{ "an-tu-wa-ah-ha-as", "𒀭𒌅𒉿𒄴𒄩𒀸" },
{ "an-tu-wa-aḫ-ḫa-aš", "𒀭𒌅𒉿𒄴𒄩𒀸" },
{ "<sup>DINGIR</sup>IŠKUR", "𒀭𒅎" }, -- Akkadian actually?
}
self:do_tests(examples, "hit")
end
function tests:test_Kannada()
local examples = {
{ "yaMtra", "ಯಂತ್ರ" },
{ "sadāśiva", "ಸದಾಶಿವ" },
{ "muṣṭi", "ಮುಷ್ಟಿ" },
{ "dhairya", "ಧೈರ್ಯ" },
{ "ELu", "ಏಳು" },
{ "iMguzETiyA", "ಇಂಗುಶೇಟಿಯಾ" },
{ "upayOga", "ಉಪಯೋಗ" },
}
self:do_tests(examples, "kn")
end
local sa_examples = {
{ "saMskRta/", "saṃskṛtá", "संस्कृत" },
{ "kSatri/ya", "kṣatríya", "क्षत्रिय" },
{ "rAja suprabuddha", "rāja suprabuddha", "राज सुप्रबुद्ध"},
{ "zAkyamuni", "śākyamuni", "शाक्यमुनि"},
{ "siMha", "siṃha", "सिंह"},
{ "nAman", "nāman", "नामन्"},
{ "anA/", "anā́", "अना" },
{ "ayuSmAn", "ayuṣmān", "अयुष्मान्"},
{ "ghatsyati", "ghatsyati", "घत्स्यति"},
{ "tApa-i", "tāpa-i", "तापइ" },
{ "tApaï", "tāpaï", "तापइ" },
}
function tests:test_Sanskrit()
self:do_tests(sa_examples, "sa")
end
function tests:test_Sanskrit_tr()
self:do_tests(sa_examples, "sa-tr")
end
function tests:test_Maithili()
local examples = {
{ "maithilI", "𑒧𑒻𑒟𑒱𑒪𑒲" },
{ "ghO_r_A", "𑒒𑒼𑒛𑓃𑒰" },
{ "ga_rh_a", "𑒑𑒜𑓃" },
{ "mokAma", "𑒧𑒽𑒏𑒰𑒧" },
{ "pa~cakhaNDI", "𑒣𑒿𑒔𑒐𑒝𑓂𑒛𑒲" },
{ "heraba", "𑒯𑒺𑒩𑒥" },
}
self:do_tests(examples, "mai")
end
function tests:test_Marwari()
local examples = {
{ "mahAjanI", "𑅬𑅱𑅛𑅧𑅑" },
{ "mukAMm", "𑅬𑅒𑅕𑅧𑅬" },
{ "AvalA", "𑅐𑅯𑅮" },
{ "AgarA", "𑅐𑅗𑅭" },
{ "upama", "𑅒𑅨𑅬" },
{ "iMdaura", "𑅑𑅧𑅥𑅒𑅭" },
}
self:do_tests(examples, "mwr")
end
function tests:test_Old_Persian()
local examples = {
{ "aitiiy", "𐎠𐎡𐎫𐎡𐎹" },
{ "raucah", "𐎼𐎢𐎨𐏃" },
{ "ham", "𐏃𐎶" },
{ "jiva", "𐎪𐎺"},
{ "daraniyakara", "𐎭𐎼𐎴𐎹𐎣𐎼" },
{ "daragama", "𐎭𐎼𐎥𐎶" },
}
self:do_tests(examples, "peo")
end
function tests:test_Parthian()
local examples = {
{ "tšynd", "𐫤𐫢𐫏𐫗𐫅" },
{ "xʾrtʾg", "𐫟𐫀𐫡𐫤𐫀𐫃" },
{ "hʾmhyrz", "𐫍𐫀𐫖𐫍𐫏𐫡𐫉" },
{ "ʿšnwhr", "𐫙𐫢𐫗𐫇𐫍𐫡"},
{ "hʾwsʾr", "𐫍𐫀𐫇𐫘𐫀𐫡" },
}
self:do_tests(examples, "xpr", "Mani")
end
function tests:test_Japanese()
local examples = {
{ "iro ha nihoheto", "いろ は にほへと" },
{ "uwyi no okuyama", "うゐ の おくやま" },
{ "FAMIRI-MA-TO", "ファミリーマート" },
{ "altu", "あっ" },
{ "hi/mi/tu", "ひ・み・つ" },
{ "han'i", "はんい" },
{ "hanni", "はんい" },
{ "konnyou", "こんよう" },
{ "mannnaka", "まんなか" },
{ "attiike", "あっちいけ" },
{ "acchiike", "あっちいけ" },
{ "upnusi", "うpぬし" },
}
self:do_tests(examples, "ja")
end
function tests:test_Old_Church_Slavonic()
local examples = {
{ "ljudije", "людиѥ" },
{ "azuh", "азъ" },
{ "buky", "боукꙑ" },
{ "mŭčati", "мъчати" },
{ "Iosija", "Иосиꙗ" },
}
self:do_tests(examples, "cu")
end
local omr_examples = {
{ "kuhA", "kuhā", "𑘎𑘳𑘮𑘰" },
{ "nibara", "nibara", "𑘡𑘲𑘤𑘨" },
{ "nIbara", "nībara", "𑘡𑘲𑘤𑘨" },
{ "Ai", "āi", "𑘁𑘃" },
{ "AI", "āī", "𑘁𑘃" },
{ "suta", "suta", "𑘭𑘳𑘝" },
{ "sUta", "suta", "𑘭𑘳𑘝" },
{ "uta", "uta", "𑘄𑘝" },
{ "Uta", "uta", "𑘄𑘝" },
{ "na-i", "na-i", "𑘡𑘃" },
{ "naï", "naï", "𑘡𑘃" },
{ "a-ila", "a-ila", "𑘀𑘃𑘩" },
{ "aïla", "aïla", "𑘀𑘃𑘩" },
{ "bhavai", "bhavai", "𑘥𑘪𑘺" },
{ "cauka", "cauka", "𑘓𑘼𑘎" },
{ "ca-utha", "ca-utha", "𑘓𑘄𑘞" },
{ "caütha", "caütha", "𑘓𑘄𑘞" },
{ "a-ukSa", "a-ukṣa", "𑘀𑘄𑘎𑘿𑘬" },
{ "aükSa", "aükṣa", "𑘀𑘄𑘎𑘿𑘬" },
{ "AThoLI", "āṭhoḷī", "𑘁𑘙𑘻𑘯𑘲" },
{ "raMbhA", "raṃbhā", "𑘨𑘽𑘥𑘰" },
{ "hRdA", "hṛdā", "𑘮𑘵𑘟𑘰" },
{ "Rkha", "ṛkha", "𑘆𑘏" },
{ "SaDa", "ṣaḍa", "𑘬𑘚" },
{ "kSeNa", "kṣeṇa", "𑘎𑘿𑘬𑘹𑘜" },
{ "zobhaNe", "śobhaṇe", "𑘫𑘻𑘥𑘜𑘹" },
{ "arha", "arha", "𑘀𑘨𑘿𑘮" },
{ "mar_hATI", "maṟhāṭī", "𑘦𑘨𑘿𑘮𑘰𑘘𑘲" },
}
function tests:test_Old_Marathi()
self:do_tests(omr_examples, "omr")
end
function tests:test_Old_Marathi_tr()
self:do_tests(omr_examples, "omr-tr")
end
function tests:test_Ossetian()
local examples = {
{ "fynʒ", "фындз" },
{ "æxsæv", "ӕхсӕв" },
{ "c’æx", "цъӕх" },
{ "biræǧ", "бирӕгъ" },
{ "Ræstʒinad", "Рӕстдзинад" },
}
self:do_tests(examples, "os")
end
function tests:test_Imperial_Aramaic()
local examples = {
{ "'nḥn", "𐡀𐡍𐡇𐡍" },
}
self:do_tests(examples, "arc", "Armi")
end
function tests:test_Old_South_Arabian()
local examples = {
{ "s²ms¹", "𐩦𐩣𐩪" },
}
self:do_tests(examples, "xsa")
end
function tests:test_Siddham()
local examples = {
{ "kanta", "𑖎𑖡𑖿𑖝" },
{ "purAna", "𑖢𑖲𑖨𑖯𑖡"},
{ "Na-i", "𑖜𑖂"},
{ "kaNNa", "𑖎𑖜𑖿𑖜"},
{ "samAia", "𑖭𑖦𑖯𑖂𑖀"},
{ "tujjhu", "𑖝𑖲𑖕𑖿𑖖𑖲"},
{ "kahante", "𑖎𑖮𑖡𑖿𑖝𑖸"},
}
self:do_tests(examples, "inc-kam")
end
function tests:test_Kaithi()
local examples = {
{ "hanU", "𑂯𑂢𑂴" },
{ "pa_rh_ahi", "𑂣𑂜𑂯𑂱" },
{ "siya~", "𑂮𑂱𑂨𑂀" },
{ "jhara-i", "𑂕𑂩𑂅" },
{ "jharaï", "𑂕𑂩𑂅" },
{ "Agi", "𑂄𑂏𑂱" },
{ "āgi", "𑂄𑂏𑂱" },
}
self:do_tests(examples, "bho")
end
function tests:test_Saurashtra()
local examples = {
{ "pani", "ꢦꢥꢶ" },
{ "vAg", "ꢮꢵꢔ꣄" },
{ "ghoDo", "ꢕꣁꢞꣁ" },
{ "dukkar", "ꢣꢸꢒ꣄ꢒꢬ꣄" },
{ "l:ovo", "ꢭꢴꣁꢮꣁ" },
}
self:do_tests(examples, "saz")
end
function tests:test_Sindhi()
local examples = {
{ "siMdhī", "𑋝𑋡𑋟𑋐𑋢" },
{ "bhAGo", "𑋖𑋠𑊿𑋧" },
{ "mAlu", "𑋗𑋠𑋚𑋣" },
{ "jeko", "𑋂𑋥𑊺𑋧" },
{ "xabara", "𑊻𑋩𑋔𑋙" },
{ "muqAmu", "𑋗𑋣𑊺𑋩𑋠𑋗𑋣" },
{ "meM", "𑋗𑋥𑋟" },
{ "gunAhu", "𑊼𑋣𑋑𑋠𑋞𑋣" },
{ "_gh_araza", "𑊼𑋩𑋙𑋂𑋩" },
{ "_gh_ufA", "𑊼𑋩𑋣𑋓𑋩𑋠" },
{ "bA_gh_u", "𑋔𑋠𑊼𑋩𑋣" },
{ "ba_gh_adAdu", "𑋔𑊼𑋩𑋏𑋠𑋏𑋣" },
{ "ghaTaNu", "𑊾𑋆𑋌𑋣" },
}
self:do_tests(examples, "sd")
end
--[[
function tests:test_Old_North_Arabian()
-- We need tests to verify that letters with diacritics or modifiers
-- transliterate correctly.
local examples = {
{ "'lšdy", "𐪑𐪁𐪆𐪕𐪚" },
}
self:do_tests(examples, "sem-tha")
end
--]]
--[[
To add another example, place the following code
within the braces of an "examples" table:
{ "shortcut", "expected result" },
{ "", "" },
or for Sanskrit,
{ "Harvard-Kyoto", "IAST", "Devanagari" },
{ "", "", "" },
]]
return tests
q4p085ojx0nvfgp1kmueoc9fnd40pue
193459
193458
2024-11-21T10:35:11Z
Lee
19
One revision from [[:en:Module:typing-aids/testcases]]
193458
Scribunto
text/plain
local tests = require('Module:UnitTests')
local m_typing = require('Module:typing-aids')
local get_by_code = require('Module:languages').getByCode
local decompose = mw.ustring.toNFD
local langs = {}
local tag_funcs = {}
-- Assumes one script per language.
local function tag_gen(test_text, langCode)
local func = tag_funcs[langCode]
if func then
return func
else
if not langs[langCode] then
langs[langCode] = get_by_code(langCode) or error('The language code ' .. langCode .. ' is invalid.')
end
local scCode = langs[langCode]:findBestScript(test_text):getCode() or
error('No script could be found for the text ' .. test_text .. ' and the language code ' .. langCode .. '.')
local before, after = '<span class="' .. scCode .. '" lang="' .. langCode .. '">', '</span>'
function func(text)
return before .. text .. after
end
tag_funcs[langCode] = func
return func
end
end
local options_cache = {}
function tests:check_output(code, expected, lang, transliteration, sc)
local result
if lang then
result = m_typing.replace{ lang, code, sc = sc }
else
result = m_typing.replace{code, sc = sc}
end
result = decompose(result)
expected = decompose(expected)
local options = options_cache[lang]
if not options and lang and not transliteration then
options = { display = tag_gen(result, lang) }
options_cache[lang] = options
end
self:equals(
code,
result,
expected,
options
)
end
function tests:do_tests(examples, lang, sc)
local transliteration = lang ~= nil and lang:find("%-tr$") ~= nil
for _, example in ipairs(examples) do
if #example == 3 and not transliteration then
self:check_output(example[1], example[3], lang, nil, sc)
if example[2] ~= example[1] then
self:check_output(example[2], example[3], lang, nil, sc)
end
else
self:check_output(example[1], example[2], lang, transliteration, sc)
end
end
end
function tests:test_all()
local examples = {
{ "*dye_'ws", "*dyḗws" },
{ "*n0mr0to's", "*n̥mr̥tós" },
{ "*tk'e'yti", "*tḱéyti" },
{ "*h1es-", "*h₁es-" },
{ "*t_ep-e'h1(ye)-ti", "*tₔp-éh₁(ye)-ti" },
{ "*h1e'k'wos", "*h₁éḱwos" },
{ "*bhebho'ydhe", "*bʰebʰóydʰe" },
{ "*dh3to's", "*dh₃tós" },
{ "*t'a_ko^`", "*þākǫ̂" },
{ "*T'eudo_balt'az", "*Þeudōbalþaz" },
{ "*bo_kijo_`", "*bōkijǭ" },
{ "*tat^t^o_", "*taťťō" },
{ "*d^o_'yyon", "*ďṓyyon" },
}
self:do_tests(examples)
end
local ae_examples = {
{ "ap", "ap", "𐬀𐬞" },
{ "xs.^uuas^", "xṣ̌uuaš", "𐬑𐬴𐬎𐬎𐬀𐬱" },
{ "v@hrka_na", "vəhrkāna", "𐬬𐬆𐬵𐬭𐬐𐬁𐬥𐬀" },
{ "nae_za", "naēza", "𐬥𐬀𐬉𐬰𐬀" },
{ "zaaO", "zā̊", "𐬰𐬃"},
{ "hizwaO", "hizuuå", "𐬵𐬌𐬰𐬎𐬎𐬂"},
}
function tests:test_Avestan()
self:do_tests(ae_examples, "ae")
end
function tests:test_Avestan_tr()
self:do_tests(ae_examples, "ae-tr")
end
function tests:test_Akkadian()
local examples = {
{ "ša", "𒊭" },
-- { "transliteration", "result" },
}
self:do_tests(examples, "akk")
end
local hy_examples = {
{ "azgaynac`um", "azgaynacʿum", "ազգայնացում" },
{ "terew", "terew", "տերև" },
{ "burz^uazia", "buržuazia", "բուրժուազիա" },
{ "kol_mnaki", "kołmnaki", "կողմնակի" },
}
function tests:test_Armenian()
self:do_tests(hy_examples, "hy")
end
function tests:test_Armenian_tr()
self:do_tests(hy_examples, "hy-tr")
end
function tests:test_Arabic()
local examples = {
{ "al-Huruuf al-qamariyyat'", "الْحُرُوف الْقَمَرِيَّة" },
{ "al-Huruuf al-xamsiyyat'", "الْحُرُوف الشَّمْسِيَّة" },
{ "ealifu WlwaSli", "أَلِفُ ٱلْوَصْلِ" },
{ "maae", "مَاء" },
{ "muemin", "مُؤْمِن" },
{ "eiDaafat'", "إِضَافَة" },
{ "eaab", "آب" },
{ "qureaan", "قُرْآن" },
{ "qiTTat'", "قِطَّة" },
{ "faEEaal", "فَعَّال" },
{ "xayeu", "شَيْءُ" },
{ "xayeaN", "شَيْءً" },
{ "daaeimaN", "دَائِمًا" },
{ "mabduueat'", "مَبْدُوءَة" },
{ "mabduu'at'", "مَبْدُوءَة" },
{ "badaaeiyyuN", "بَدَائِيٌّ" },
{ "badaaeat'", "بَدَاءَة" },
{ "maktuub", "مَكْتُوب" },
{ "taHriir", "تَحْرِير" },
{ "EuZmaaa", "عُظْمَى" },
{ "ean0", "أَنْ" },
{ "law0", "لَوْ" },
{ "xay'aN", "شَيْءً" },
{ "ta7riir", "تَحْرِير" },
{ "3axarat'", "عَشَرَة" },
}
self:do_tests(examples, "ar")
end
function tests:test_Persian()
local examples = {
{ "brAdr", "برادر" },
}
self:do_tests(examples, "fa")
end
function tests:test_PIE()
local examples = {
{ "*dye_'ws", "*dyḗws" },
{ "*n0mr0to's", "*n̥mr̥tós" },
{ "*tk'e'yti", "*tḱéyti" },
{ "*h1es-", "*h₁es-" },
{ "*t_ep-e'h1(ye)-ti", "*tₔp-éh₁(ye)-ti" },
{ "*h1e'k'wos", "*h₁éḱwos" },
{ "*bhebho'ydhe", "*bʰebʰóydʰe" },
{ "*dh3to's", "*dh₃tós" },
{ "*dhewg'h-", "*dʰewǵʰ-" },
}
self:do_tests(examples, "ine-pro")
end
function tests:test_Germanic()
local examples = {
{ "*t'a_ko^`", "*þākǫ̂" },
{ "*T'eudo_balt'az", "*Þeudōbalþaz" },
{ "*bo_kijo_`", "*bōkijǭ" },
}
self:do_tests(examples, "gem-pro")
end
function tests:test_Gothic()
local examples = {
{ "ƕaiwa", "𐍈𐌰𐌹𐍅𐌰" },
{ "anþar", "𐌰𐌽𐌸𐌰𐍂" },
{ "fidwōr", "𐍆𐌹𐌳𐍅𐍉𐍂" },
{ "fidwor", "𐍆𐌹𐌳𐍅𐍉𐍂" },
{ "mikils", "𐌼𐌹𐌺𐌹𐌻𐍃" },
{ "hēr", "𐌷𐌴𐍂" },
{ "her", "𐌷𐌴𐍂" },
{ "vac", "𐍈𐌰𐌸" },
-- { "", "" },
}
self:do_tests(examples, "got")
end
function tests:test_Hellenic()
local examples = {
{ "*tat^t^o_", "*taťťō" },
{ "*d^o_'yyon", "*ďṓyyon" },
{ "*gw@n'n'o_", "*gʷəňňō" },
{ "*gw@n^n^o_", "*gʷəňňō" },
{ "*kwhe_r", "*kʷʰēr" },
{ "*khwe_r", "*kʷʰēr" },
}
self:do_tests(examples, "grk-pro")
end
function tests:test_Greek()
local examples = {
{ "a__i", "ᾱͅ" },
{ "a)lhqh/s", "ἀληθής" },
{ "a)lhqhs*", "ἀληθησ" },
{ "a)lhqhs-", "ἀληθησ-" },
{ "a^)nh/r", "ᾰ̓νήρ" },
{ "Phlhi+a/dhs", "Πηληϊάδης" },
{ "Phlhi^+a^/dhs", "Πηληῐ̈ᾰ́δης" },
{ "Πηληϊ^ά^δης", "Πηληῐ̈ᾰ́δης" },
{ "e)a_/n", "ἐᾱ́ν" },
{ "ἐά_ν", "ἐᾱ́ν" },
{ "pa=sa^", "πᾶσᾰ" },
{ "u_(mei=s", "ῡ̔μεῖς" },
{ "a/)^ner", "ᾰ̓́νερ" },
{ "a/^)ner", "ᾰ̓́νερ" },
{ "a)/^ner", "ᾰ̓́νερ" },
{ "a)^/ner", "ᾰ̓́νερ" },
{ "dai+/frwn", "δαΐφρων" },
{ "dai/+frwn", "δαΐφρων" },
}
self:do_tests(examples, "grc")
end
function tests:test_Hittite()
local examples = {
{ "a-ku", "𒀀𒆪" },
{ "an-tu-wa-ah-ha-as", "𒀭𒌅𒉿𒄴𒄩𒀸" },
{ "an-tu-wa-aḫ-ḫa-aš", "𒀭𒌅𒉿𒄴𒄩𒀸" },
{ "<sup>DINGIR</sup>IŠKUR", "𒀭𒅎" }, -- Akkadian actually?
}
self:do_tests(examples, "hit")
end
function tests:test_Kannada()
local examples = {
{ "yaMtra", "ಯಂತ್ರ" },
{ "sadāśiva", "ಸದಾಶಿವ" },
{ "muṣṭi", "ಮುಷ್ಟಿ" },
{ "dhairya", "ಧೈರ್ಯ" },
{ "ELu", "ಏಳು" },
{ "iMguzETiyA", "ಇಂಗುಶೇಟಿಯಾ" },
{ "upayOga", "ಉಪಯೋಗ" },
}
self:do_tests(examples, "kn")
end
local sa_examples = {
{ "saMskRta/", "saṃskṛtá", "संस्कृत" },
{ "kSatri/ya", "kṣatríya", "क्षत्रिय" },
{ "rAja suprabuddha", "rāja suprabuddha", "राज सुप्रबुद्ध"},
{ "zAkyamuni", "śākyamuni", "शाक्यमुनि"},
{ "siMha", "siṃha", "सिंह"},
{ "nAman", "nāman", "नामन्"},
{ "anA/", "anā́", "अना" },
{ "ayuSmAn", "ayuṣmān", "अयुष्मान्"},
{ "ghatsyati", "ghatsyati", "घत्स्यति"},
{ "tApa-i", "tāpa-i", "तापइ" },
{ "tApaï", "tāpaï", "तापइ" },
}
function tests:test_Sanskrit()
self:do_tests(sa_examples, "sa")
end
function tests:test_Sanskrit_tr()
self:do_tests(sa_examples, "sa-tr")
end
function tests:test_Maithili()
local examples = {
{ "maithilI", "𑒧𑒻𑒟𑒱𑒪𑒲" },
{ "ghO_r_A", "𑒒𑒼𑒛𑓃𑒰" },
{ "ga_rh_a", "𑒑𑒜𑓃" },
{ "mokAma", "𑒧𑒽𑒏𑒰𑒧" },
{ "pa~cakhaNDI", "𑒣𑒿𑒔𑒐𑒝𑓂𑒛𑒲" },
{ "heraba", "𑒯𑒺𑒩𑒥" },
}
self:do_tests(examples, "mai")
end
function tests:test_Marwari()
local examples = {
{ "mahAjanI", "𑅬𑅱𑅛𑅧𑅑" },
{ "mukAMm", "𑅬𑅒𑅕𑅧𑅬" },
{ "AvalA", "𑅐𑅯𑅮" },
{ "AgarA", "𑅐𑅗𑅭" },
{ "upama", "𑅒𑅨𑅬" },
{ "iMdaura", "𑅑𑅧𑅥𑅒𑅭" },
}
self:do_tests(examples, "mwr")
end
function tests:test_Old_Persian()
local examples = {
{ "aitiiy", "𐎠𐎡𐎫𐎡𐎹" },
{ "raucah", "𐎼𐎢𐎨𐏃" },
{ "ham", "𐏃𐎶" },
{ "jiva", "𐎪𐎺"},
{ "daraniyakara", "𐎭𐎼𐎴𐎹𐎣𐎼" },
{ "daragama", "𐎭𐎼𐎥𐎶" },
}
self:do_tests(examples, "peo")
end
function tests:test_Parthian()
local examples = {
{ "tšynd", "𐫤𐫢𐫏𐫗𐫅" },
{ "xʾrtʾg", "𐫟𐫀𐫡𐫤𐫀𐫃" },
{ "hʾmhyrz", "𐫍𐫀𐫖𐫍𐫏𐫡𐫉" },
{ "ʿšnwhr", "𐫙𐫢𐫗𐫇𐫍𐫡"},
{ "hʾwsʾr", "𐫍𐫀𐫇𐫘𐫀𐫡" },
}
self:do_tests(examples, "xpr", "Mani")
end
function tests:test_Japanese()
local examples = {
{ "iro ha nihoheto", "いろ は にほへと" },
{ "uwyi no okuyama", "うゐ の おくやま" },
{ "FAMIRI-MA-TO", "ファミリーマート" },
{ "altu", "あっ" },
{ "hi/mi/tu", "ひ・み・つ" },
{ "han'i", "はんい" },
{ "hanni", "はんい" },
{ "konnyou", "こんよう" },
{ "mannnaka", "まんなか" },
{ "attiike", "あっちいけ" },
{ "acchiike", "あっちいけ" },
{ "upnusi", "うpぬし" },
}
self:do_tests(examples, "ja")
end
function tests:test_Old_Church_Slavonic()
local examples = {
{ "ljudije", "людиѥ" },
{ "azuh", "азъ" },
{ "buky", "боукꙑ" },
{ "mŭčati", "мъчати" },
{ "Iosija", "Иосиꙗ" },
}
self:do_tests(examples, "cu")
end
local omr_examples = {
{ "kuhA", "kuhā", "𑘎𑘳𑘮𑘰" },
{ "nibara", "nibara", "𑘡𑘲𑘤𑘨" },
{ "nIbara", "nībara", "𑘡𑘲𑘤𑘨" },
{ "Ai", "āi", "𑘁𑘃" },
{ "AI", "āī", "𑘁𑘃" },
{ "suta", "suta", "𑘭𑘳𑘝" },
{ "sUta", "suta", "𑘭𑘳𑘝" },
{ "uta", "uta", "𑘄𑘝" },
{ "Uta", "uta", "𑘄𑘝" },
{ "na-i", "na-i", "𑘡𑘃" },
{ "naï", "naï", "𑘡𑘃" },
{ "a-ila", "a-ila", "𑘀𑘃𑘩" },
{ "aïla", "aïla", "𑘀𑘃𑘩" },
{ "bhavai", "bhavai", "𑘥𑘪𑘺" },
{ "cauka", "cauka", "𑘓𑘼𑘎" },
{ "ca-utha", "ca-utha", "𑘓𑘄𑘞" },
{ "caütha", "caütha", "𑘓𑘄𑘞" },
{ "a-ukSa", "a-ukṣa", "𑘀𑘄𑘎𑘿𑘬" },
{ "aükSa", "aükṣa", "𑘀𑘄𑘎𑘿𑘬" },
{ "AThoLI", "āṭhoḷī", "𑘁𑘙𑘻𑘯𑘲" },
{ "raMbhA", "raṃbhā", "𑘨𑘽𑘥𑘰" },
{ "hRdA", "hṛdā", "𑘮𑘵𑘟𑘰" },
{ "Rkha", "ṛkha", "𑘆𑘏" },
{ "SaDa", "ṣaḍa", "𑘬𑘚" },
{ "kSeNa", "kṣeṇa", "𑘎𑘿𑘬𑘹𑘜" },
{ "zobhaNe", "śobhaṇe", "𑘫𑘻𑘥𑘜𑘹" },
{ "arha", "arha", "𑘀𑘨𑘿𑘮" },
{ "mar_hATI", "maṟhāṭī", "𑘦𑘨𑘿𑘮𑘰𑘘𑘲" },
}
function tests:test_Old_Marathi()
self:do_tests(omr_examples, "omr")
end
function tests:test_Old_Marathi_tr()
self:do_tests(omr_examples, "omr-tr")
end
function tests:test_Ossetian()
local examples = {
{ "fynʒ", "фындз" },
{ "æxsæv", "ӕхсӕв" },
{ "c’æx", "цъӕх" },
{ "biræǧ", "бирӕгъ" },
{ "Ræstʒinad", "Рӕстдзинад" },
}
self:do_tests(examples, "os")
end
function tests:test_Imperial_Aramaic()
local examples = {
{ "'nḥn", "𐡀𐡍𐡇𐡍" },
}
self:do_tests(examples, "arc", "Armi")
end
function tests:test_Old_South_Arabian()
local examples = {
{ "s²ms¹", "𐩦𐩣𐩪" },
}
self:do_tests(examples, "xsa")
end
function tests:test_Siddham()
local examples = {
{ "kanta", "𑖎𑖡𑖿𑖝" },
{ "purAna", "𑖢𑖲𑖨𑖯𑖡"},
{ "Na-i", "𑖜𑖂"},
{ "kaNNa", "𑖎𑖜𑖿𑖜"},
{ "samAia", "𑖭𑖦𑖯𑖂𑖀"},
{ "tujjhu", "𑖝𑖲𑖕𑖿𑖖𑖲"},
{ "kahante", "𑖎𑖮𑖡𑖿𑖝𑖸"},
}
self:do_tests(examples, "inc-kam")
end
function tests:test_Kaithi()
local examples = {
{ "hanU", "𑂯𑂢𑂴" },
{ "pa_rh_ahi", "𑂣𑂜𑂯𑂱" },
{ "siya~", "𑂮𑂱𑂨𑂀" },
{ "jhara-i", "𑂕𑂩𑂅" },
{ "jharaï", "𑂕𑂩𑂅" },
{ "Agi", "𑂄𑂏𑂱" },
{ "āgi", "𑂄𑂏𑂱" },
}
self:do_tests(examples, "bho")
end
function tests:test_Saurashtra()
local examples = {
{ "pani", "ꢦꢥꢶ" },
{ "vAg", "ꢮꢵꢔ꣄" },
{ "ghoDo", "ꢕꣁꢞꣁ" },
{ "dukkar", "ꢣꢸꢒ꣄ꢒꢬ꣄" },
{ "l:ovo", "ꢭꢴꣁꢮꣁ" },
}
self:do_tests(examples, "saz")
end
function tests:test_Sindhi()
local examples = {
{ "siMdhī", "𑋝𑋡𑋟𑋐𑋢" },
{ "bhAGo", "𑋖𑋠𑊿𑋧" },
{ "mAlu", "𑋗𑋠𑋚𑋣" },
{ "jeko", "𑋂𑋥𑊺𑋧" },
{ "xabara", "𑊻𑋩𑋔𑋙" },
{ "muqAmu", "𑋗𑋣𑊺𑋩𑋠𑋗𑋣" },
{ "meM", "𑋗𑋥𑋟" },
{ "gunAhu", "𑊼𑋣𑋑𑋠𑋞𑋣" },
{ "_gh_araza", "𑊼𑋩𑋙𑋂𑋩" },
{ "_gh_ufA", "𑊼𑋩𑋣𑋓𑋩𑋠" },
{ "bA_gh_u", "𑋔𑋠𑊼𑋩𑋣" },
{ "ba_gh_adAdu", "𑋔𑊼𑋩𑋏𑋠𑋏𑋣" },
{ "ghaTaNu", "𑊾𑋆𑋌𑋣" },
}
self:do_tests(examples, "sd")
end
--[[
function tests:test_Old_North_Arabian()
-- We need tests to verify that letters with diacritics or modifiers
-- transliterate correctly.
local examples = {
{ "'lšdy", "𐪑𐪁𐪆𐪕𐪚" },
}
self:do_tests(examples, "sem-tha")
end
--]]
--[[
To add another example, place the following code
within the braces of an "examples" table:
{ "shortcut", "expected result" },
{ "", "" },
or for Sanskrit,
{ "Harvard-Kyoto", "IAST", "Devanagari" },
{ "", "", "" },
]]
return tests
q4p085ojx0nvfgp1kmueoc9fnd40pue
Module:typing-aids/testcases/documentation
828
125507
193460
2024-03-08T22:46:39Z
en>WingerBot
0
clean up manually-specified categories for testcase modules (manually assisted)
193460
wikitext
text/x-wiki
{{#invoke:typing-aids/testcases|run_tests}}
{{module cat|-|Character insertion}}
l1q6b0fd8ljoi5agb7cbvf812en2hr5
193461
193460
2024-11-21T10:35:27Z
Lee
19
One revision from [[:en:Module:typing-aids/testcases/documentation]]
193460
wikitext
text/x-wiki
{{#invoke:typing-aids/testcases|run_tests}}
{{module cat|-|Character insertion}}
l1q6b0fd8ljoi5agb7cbvf812en2hr5
Module talk:typing-aids/data/Armi
829
125508
193462
2024-11-21T10:39:27Z
Pinthura
2424
Moved page from [[en:Module:typing-aids/data/Armi]]
193462
wikitext
text/x-wiki
{| class="wikitable"
! oldid || date/time || username || edit summary
|----
| 62147916 || 2021-03-17T22:54:04Z || Erutuon || <nowiki>move function to [[Module:typing-aids/data helpers]]</nowiki>
|----
| 62147747 || 2021-03-17T22:29:36Z || Erutuon || <nowiki></nowiki>
|----
| 62147741 || 2021-03-17T22:29:02Z || Erutuon || <nowiki></nowiki>
|----
| 62147733 || 2021-03-17T22:28:06Z || Erutuon || <nowiki>process data table to decompose and separate into a table of single-letter replacements and multi-letter ones</nowiki>
|----
| 62142838 || 2021-03-17T01:33:08Z || Metaknowledge || <nowiki>fix mistakes, add alternatives</nowiki>
|----
| 52365602 || 2019-04-16T03:22:07Z || Victar || <nowiki>Created page with "local data = {} local U = mw.ustring.char local circumflex = U(0x302) -- circumflex local macron_below = U(0x331) -- macron below data = { [1] = { ["ʾ"] = "𐡀", -- a..."</nowiki>
|}
6my5suhg2is2m9ya72ov12r0rm3vqi1
Module:typing-aids/data/Armi
828
125509
193463
2024-11-21T10:39:37Z
Pinthura
2424
Moved page from [[en:Module:typing-aids/data/Armi]] ([[Module talk:typing-aids/data/Armi|history]])
193463
Scribunto
text/plain
local U = mw.ustring.char
local circumflex = U(0x302) -- circumflex
local macron_below = U(0x331) -- macron below
local data = {
["ʾ"] = "𐡀", ["'"] = "𐡀", -- aleph
["b"] = "𐡁", -- beth
["g"] = "𐡂", -- gimel
["d"] = "𐡃", -- daleth
["h"] = "𐡄", -- he
["w"] = "𐡅", -- waw
["z"] = "𐡆", -- zayin
["ḥ"] = "𐡇", -- heth
["ṭ"] = "𐡈", -- teth
["y"] = "𐡉", -- yodh
["k"] = "𐡊", -- khaph
["l"] = "𐡋", -- lamedh
["m"] = "𐡌", -- mem
["n"] = "𐡍", -- nun
["s"] = "𐡎", -- samekh
["ʿ"] = "𐡏", ["3"] = "𐡏", -- ayin
["p"] = "𐡐", -- pe
["c"] = "𐡑", ["ṣ"] = "𐡑", -- sadhe
["q"] = "𐡒", -- qoph
["r"] = "𐡓", -- resh
["š"] = "𐡔", ["s" .. circumflex] = "𐡔", -- shin
["t"] = "𐡕", -- taw
[" "] = "𐡗", -- section sign
["1"] = "𐡘", -- one
["2"] = "𐡙", -- two
["3"] = "𐡚", -- three
["10"] = "𐡛", -- ten
["20"] = "𐡜", -- twenty
["100"] = "𐡝", -- one hundred
["1000"] = "𐡞", -- one thousand
["10000"] = "𐡟", -- ten thousand
}
return require "Module:typing-aids/data helpers".split_single_and_multi_char(data)
o4rt1cz7do8dthkwsobqip54awjswmz
Module talk:typing-aids/data/Chrs
829
125510
193464
2024-11-21T10:39:47Z
Pinthura
2424
Moved page from [[en:Module:typing-aids/data/Chrs]]
193464
wikitext
text/x-wiki
{| class="wikitable"
! oldid || date/time || username || edit summary
|----
| 58150697 || 2019-12-12T02:42:58Z || Victar || <nowiki>Created page with "local data = {} local U = mw.ustring.char local dot_below = U(0x323) -- dot below local caron = U(0x30C) -- caron local circumflex = U(0x302) -- circumflex local macron =..."</nowiki>
|}
2kjy0l7emk7i6s2spueofcij40hhz9l
Module:typing-aids/data/Chrs
828
125511
193465
2024-11-21T10:39:57Z
Pinthura
2424
Moved page from [[en:Module:typing-aids/data/Chrs]] ([[Module talk:typing-aids/data/Chrs|history]])
193465
Scribunto
text/plain
local data = {}
local U = mw.ustring.char
local dot_below = U(0x323) -- dot below
local caron = U(0x30C) -- caron
local circumflex = U(0x302) -- circumflex
local macron = U(0x304) -- macron
local scaron = U(0x161) -- latin small letter s with caron
data = {
[1] = {
["’"] = "ʾ", ["a" .. circumflex] = "ʾ", ["a" .. macron] = "ʾ",
["B"] = "b",
["G"] = "ɣ",
["H" .. dot_below] = "x",
["L"] = "δ",
["e" .. circumflex] = "ʿ", ["e" .. macron] = "ʿ", ["E"] = "ʿ",
["S"] = "s" .. caron,
},
[2] = {
["s" .. caron] = "𐽁", -- shin
},
[3] = {
["ʾ"] = "𐾰", -- aleph
["A"] = "𐾱", -- small aleph
["β"] = "𐾲", -- beth
["ɣ"] = "𐾳", -- gimel
["d"] = "𐾴", -- daleth
["h"] = "𐾵", -- he
["w"] = "𐾶", -- waw
["W"] = "𐾷", -- curled waw
["z"] = "𐾸", -- zayin
["x"] = "𐾹", -- heth
["y"] = "𐾺", -- yodh
["k"] = "𐾻", -- kaph
["δ"] = "𐾼", -- lamedh
["m"] = "𐾽", -- mem
["n"] = "𐾾", -- nun
["s"] = "𐾿", -- samekh
["ʿ"] = "𐿀", -- ayin
["p"] = "𐿁", -- pe
["r"] = "𐿂", -- resh
[scaron] = "𐿃", -- shin
["t"] = "𐿄", -- taw
["1"] = "𐿅", -- one
["2"] = "𐿆", -- two
["3"] = "𐿇", -- three
["4"] = "𐿈", -- four
["10"] = "𐿉", -- ten
["20"] = "𐿊", -- twenty
["100"] = "𐿋", -- one hundred
},
}
return data
kbgb69fnhuzglgut2smwndhhhzqbfh0
Module talk:typing-aids/data/Cyrs
829
125512
193466
2024-11-21T10:40:07Z
Pinthura
2424
Moved page from [[en:Module:typing-aids/data/Cyrs]]
193466
wikitext
text/x-wiki
{| class="wikitable"
! oldid || date/time || username || edit summary
|----
| 57997008 || 2019-11-13T22:23:07Z || Victar || <nowiki>Undo revision 57994798 by [[Special:Contributions/Victar|Victar]] ([[User talk:Victar|talk]])</nowiki>
|----
| 57994798 || 2019-11-13T10:40:12Z || Victar || <nowiki></nowiki>
|----
| 54741225 || 2019-10-05T16:58:08Z || Erutuon || <nowiki>add capital replacements programmatically</nowiki>
|----
| 47555919 || 2017-09-23T21:42:09Z || Erutuon || <nowiki>fix shortcuts containing diacritics</nowiki>
|----
| 47552823 || 2017-09-23T19:02:50Z || Erutuon || <nowiki>fixed some conflicts</nowiki>
|----
| 47552667 || 2017-09-23T18:44:01Z || Erutuon || <nowiki>from [[Module talk:typing-aids#Early Cyrillic for OCS]]</nowiki>
|}
bjipxwsc8f1dtjx7fedwo84vi5ysj8o
Module:typing-aids/data/Cyrs
828
125513
193467
2024-11-21T10:40:17Z
Pinthura
2424
Moved page from [[en:Module:typing-aids/data/Cyrs]] ([[Module talk:typing-aids/data/Cyrs|history]])
193467
Scribunto
text/plain
local U = mw.ustring.char
local caron = U(0x30C)
local breve = U(0x306)
local grave = U(0x300)
local hook = U(0x328)
local data = {
{
-- Letters with diacritics in a shortcut sequence are decomposed before
-- being run through the substitutions
["s" .. caron .. "t"] = "щ",
},
{
["jo~"] = "ѭ",
["je~"] = "ѩ",
["i" .. breve] = "ь", -- ĭ
["e" .. caron] = "ѣ", -- ě
["o" .. hook] = "ѫ", -- ǫ
["e" .. hook] = "ѧ", -- ę
["i" .. grave] = "і", -- ì
["z" .. caron] = "ж", -- ž
["c" .. caron] = "ч", -- č
["s" .. caron] = "ш", -- š
["sht"] = "щ",
["u" .. breve] = "ъ", -- ŭ
},
{
["zh"] = "ж",
["d^z"] = "ѕ",
["i\\"] = "і",
["o_"] = "ѡ",
["ch"] = "ч",
["sh"] = "ш",
["uh"] = "ъ",
["ih"] = "ь",
["eh"] = "ѣ",
["ja"] = "ꙗ",
["je"] = "ѥ",
["ju"] = "ю",
["o~"] = "ѫ",
["jǫ"] = "ѭ",
["e~"] = "ѧ",
["ję"] = "ѩ",
["k^s"] = "ѯ",
["p^s"] = "ѱ",
["th"] = "ѳ",
["y\\"] = "ѵ",
},
{
["a"] = "а",
["b"] = "б",
["v"] = "в",
["g"] = "г",
["d"] = "д",
["e"] = "е",
["z"] = "з",
["i"] = "и",
["k"] = "к",
["l"] = "л",
["m"] = "м",
["n"] = "н",
["o"] = "о",
["p"] = "п",
["r"] = "р",
["s"] = "с",
["t"] = "т",
["u"] = "оу",
["f"] = "ф",
["x"] = "х",
["ō"] = "ѡ",
["c"] = "ц",
["y"] = "ꙑ",
["ξ"] = "ѯ",
["ѱ"] = "ѱ",
["θ"] = "ѳ",
["ü"] = "ѵ",
["q"] = "ҁ",
},
}
-- Add replacements for capitals: both an all-caps version ("JA")
-- and capitalized version ("Ja").
for _, replacements in ipairs(data) do
-- sortedPairs saves the list of table keys so that we can modify the table
-- while iterating over it.
for text, replacement in require "Module:table".sortedPairs(replacements) do
replacement = mw.ustring.upper(replacement)
replacements[mw.ustring.upper(text)] = replacement
replacements[mw.ustring.gsub(text, "^.", mw.ustring.upper)] = replacement
end
end
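-- For example (a descriptive note, not part of the original module): the shortcut
-- "zh" = "ж" defined above also gains "Zh" = "Ж" and "ZH" = "Ж" through this loop.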
return data
nkuyobpkwoy4326lh8vbw40nxwqsnot
Module talk:typing-aids/data/Mani
829
125514
193468
2024-11-21T10:40:27Z
Pinthura
2424
Moved page from [[en:Module:typing-aids/data/Mani]]
193468
wikitext
text/x-wiki
{| class="wikitable"
! oldid || date/time || username || edit summary
|----
| 75389230 || 2023-07-22T06:28:22Z || Victar || <nowiki></nowiki>
|----
| 61876580 || 2021-02-23T05:37:51Z || Erutuon || <nowiki>error</nowiki>
|----
| 61876576 || 2021-02-23T05:36:38Z || Erutuon || <nowiki>gotta replace SS before S</nowiki>
|----
| 61876543 || 2021-02-23T05:31:26Z || Erutuon || <nowiki>Undo revision 61818580 by [[Special:Contributions/Victar|Victar]] ([[User talk:Victar|talk]]): now trying to enable Mani-tr replacements</nowiki>
|----
| 61818580 || 2021-02-14T10:26:33Z || Victar || <nowiki></nowiki>
|----
| 61816202 || 2021-02-14T01:00:55Z || Victar || <nowiki></nowiki>
|----
| 61816193 || 2021-02-14T00:54:55Z || Victar || <nowiki></nowiki>
|----
| 61816124 || 2021-02-14T00:27:25Z || Victar || <nowiki></nowiki>
|----
| 54434251 || 2019-09-29T20:32:29Z || Victar || <nowiki></nowiki>
|----
| 53203529 || 2019-05-30T19:39:40Z || Victar || <nowiki></nowiki>
|----
| 52008254 || 2019-03-18T19:58:28Z || Erutuon || <nowiki>all letters with combining diacritics before un-diacriticked letters</nowiki>
|----
| 52007985 || 2019-03-18T19:53:28Z || Erutuon || <nowiki>Undo revision 52007976 by [[Special:Contributions/Erutuon|Erutuon]] ([[User talk:Erutuon|talk]])</nowiki>
|----
| 52007976 || 2019-03-18T19:53:20Z || Erutuon || <nowiki>experiment</nowiki>
|----
| 51987576 || 2019-03-18T05:26:44Z || Victar || <nowiki></nowiki>
|----
| 51987568 || 2019-03-18T05:23:44Z || Victar || <nowiki></nowiki>
|----
| 50229508 || 2018-08-30T02:01:32Z || Victar || <nowiki></nowiki>
|----
| 49123433 || 2018-03-09T09:23:37Z || Victar || <nowiki></nowiki>
|----
| 49118741 || 2018-03-08T16:37:54Z || Victar || <nowiki></nowiki>
|----
| 49082297 || 2018-03-02T00:42:41Z || Victar || <nowiki></nowiki>
|----
| 49082055 || 2018-03-01T23:02:20Z || Victar || <nowiki></nowiki>
|----
| 49082027 || 2018-03-01T22:54:54Z || Victar || <nowiki></nowiki>
|----
| 49080678 || 2018-03-01T16:51:53Z || Victar || <nowiki></nowiki>
|----
| 49080484 || 2018-03-01T16:24:14Z || Victar || <nowiki>these are just problematic</nowiki>
|----
| 49080339 || 2018-03-01T15:56:57Z || Victar || <nowiki></nowiki>
|----
| 49077792 || 2018-03-01T05:37:50Z || Erutuon || <nowiki>resolve conflict between ssh and sh</nowiki>
|----
| 49077667 || 2018-03-01T04:42:31Z || Victar || <nowiki></nowiki>
|----
| 49077664 || 2018-03-01T04:40:20Z || Victar || <nowiki></nowiki>
|----
| 49077662 || 2018-03-01T04:39:41Z || Victar || <nowiki></nowiki>
|----
| 49077649 || 2018-03-01T04:33:14Z || Victar || <nowiki></nowiki>
|----
| 49077460 || 2018-03-01T03:06:39Z || Victar || <nowiki></nowiki>
|----
| 49077332 || 2018-03-01T02:18:04Z || Victar || <nowiki></nowiki>
|----
| 49077327 || 2018-03-01T02:16:05Z || Victar || <nowiki></nowiki>
|----
| 49077296 || 2018-03-01T01:56:56Z || Victar || <nowiki></nowiki>
|----
| 49077159 || 2018-03-01T00:30:37Z || Victar || <nowiki>Created page with "local U = mw.ustring.char local data = { { ["𐫀"] = "ʾ", -- aleph ["𐫁"] = "b", -- beth ["𐫂"] = "β", -- bheth ["𐫃"] = "g", -- gimel ["𐫄"] = "ɣ", -- ghim..."</nowiki>
|}
cs3hr139q9ot6tmuu18ec7q69xxuvx8
Module:typing-aids/data/Mani
828
125515
193471
2024-11-21T10:40:37Z
Pinthura
2424
Moved page from [[en:Module:typing-aids/data/Mani]] ([[Module talk:typing-aids/data/Mani|history]])
193471
Scribunto
text/plain
local data = {}
local U = mw.ustring.char
local acute = U(0x301) -- acute
local diaeresis = U(0x308) -- diaeresis
local dot_above = U(0x307) -- dot above
local dot_below = U(0x323) -- dot below
local caron = U(0x30C) -- caron
local circumflex = U(0x302) -- circumflex
local macron = U(0x304) -- macron
local macron_below = U(0x331) -- macron below
local gcaron = U(0x1E7) -- latin small letter g with caron
local scaron = U(0x161) -- latin small letter s with caron
data = {
[1] = {
["g" .. caron] = "𐫄", -- ghimel
["h" .. macron_below] = "𐫆", -- he
["w" .. dot_above .. dot_below ] = "𐫈", -- ud (conjunction)
["t" .. macron_below] = "𐫎", -- teth
["k" .. diaeresis] = "𐫒", -- khaph
["ʿ" .. diaeresis] = "𐫚", -- aayin
["q" .. diaeresis] = "𐫠", -- qhoph
["s" .. caron] = "𐫢", -- shin
["s" .. acute] = "𐫣", -- sshin
},
[2] = {
["ʾ"] = "𐫀", -- aleph
["b"] = "𐫁", -- beth
["g"] = "𐫃", -- gimel
[gcaron] = "𐫄", -- ghimel
["d"] = "𐫅", -- daleth
["w"] = "𐫇", -- waw
["z"] = "𐫉", -- zayin
["j"] = "𐫋", -- jayin
["h"] = "𐫍", -- heth
["y"] = "𐫏", -- yodh
["k"] = "𐫐", -- kaph
["l"] = "𐫓", -- lamedh
["δ"] = "𐫔", -- dhamedh
["θ"] = "𐫕", -- thamedh
["m"] = "𐫖", -- mem
["n"] = "𐫗", -- nun
["ʿ"] = "𐫙", -- ayin
["p"] = "𐫛", -- pe
["f"] = "𐫜", -- fe
["c"] = "𐫝", -- sadhe
["q"] = "𐫞", -- qoph
["x"] = "𐫟", -- xophh
["r"] = "𐫡", -- resh
[scaron] = "𐫢", -- shin
["s"] = "𐫘", -- samekh
["t"] = "𐫤", -- taw
["100"] = "𐫯", -- one hundred
["10"] = "𐫭", -- ten
["1"] = "𐫫", -- one
["5"] = "𐫬", -- five
["20"] = "𐫮", -- twenty
},
}
data["Mani-tr"] = {
{
["SS"] = "s" .. acute,
},
{
["’"] = "ʾ", ["a" .. circumflex] = "ʾ", ["a" .. macron] = "ʾ", ["A"] = "ʾ",
["β"] = "b", ["B"] = "b",
["ɣ"] = gcaron, ["γ"] = gcaron, ["G"] = gcaron,
["H"] = "h" .. macron_below,
["W"] = "w" .. dot_above .. dot_below,
["z" .. circumflex] = "z", ["Z"] = "z",
["j" .. circumflex] = "j", ["J"] = "j",
["H"] = "h" .. macron_below,
["k" .. dot_above] = "k",
["D"] = "δ",
["T"] = "θ",
["e" .. diaeresis] = "ʿ" .. diaeresis,
["e" .. circumflex] = "ʿ", ["e" .. macron] = "ʿ", ["E"] = "ʿ",
["c" .. caron] = "c", ["C"] = "c",
["Q"] = "q" .. diaeresis,
["q" .. dot_above] = "x",
["s" .. dot_below] = "s" .. acute,
["S"] = "s" .. caron,
},
}
return data
7xyyq676idmvop5xo25v2vxgdkndo3x
Module talk:typing-aids/data/Narb
829
125516
193472
2024-11-21T10:40:47Z
Pinthura
2424
Moved page from [[en:Module:typing-aids/data/Narb]]
193472
wikitext
text/x-wiki
{| class="wikitable"
! oldid || date/time || username || edit summary
|----
| 62147962 || 2021-03-17T23:02:20Z || Erutuon || <nowiki>use a function to fix the ordering of replacements and replace ' with ʾ in Sarb table</nowiki>
|----
| 62142750 || 2021-03-17T00:43:35Z || Metaknowledge || <nowiki></nowiki>
|----
| 61172405 || 2020-11-22T05:14:49Z || Erutuon || <nowiki>two-letter combinations before one-letter ones, because there's a conflict between 3 and s3</nowiki>
|----
| 61172339 || 2020-11-22T05:04:16Z || Metaknowledge || <nowiki></nowiki>
|----
| 61172290 || 2020-11-22T04:52:28Z || Metaknowledge || <nowiki>Created page with "local U = mw.ustring.char local data = {} data["Narb"] = { [1] = { ["ʾ"] = "𐪑", ["ʿ"] = "𐪒", ["b"] = "𐪈", ["d"] = "𐪕", ["ḏ"] = "𐪙", ["ḍ"..."</nowiki>
|}
r4z43tlwk8jkrxip9st0oze496zuf6k
Module:typing-aids/data/Narb
828
125517
193473
2024-11-21T10:40:57Z
Pinthura
2424
Moved page from [[en:Module:typing-aids/data/Narb]] ([[Module talk:typing-aids/data/Narb|history]])
193473
Scribunto
text/plain
local U = mw.ustring.char
local data = {}
data["Narb"] = require "Module:typing-aids/data helpers".split_single_and_multi_char {
["s3"] = "𐪏",
["d_"] = "𐪙",
["h_"] = "𐪍",
["s1"] = "𐪊",
["s2"] = "𐪆",
["t_"] = "𐪛",
["ʾ"] = "𐪑",
["3"] = "𐪒",
["b"] = "𐪈",
["d"] = "𐪕",
["ḍ"] = "𐪓",
["f"] = "𐪐",
["g"] = "𐪔",
["ġ"] = "𐪖",
["h"] = "𐪀",
["ḥ"] = "𐪂",
["k"] = "𐪋",
["l"] = "𐪁",
["m"] = "𐪃",
["n"] = "𐪌",
["q"] = "𐪄",
["r"] = "𐪇",
["ṣ"] = "𐪎",
["t"] = "𐪉",
["ṭ"] = "𐪗",
["w"] = "𐪅",
["y"] = "𐪚",
["z"] = "𐪘",
["ẓ"] = "𐪜",
["ḏ"] = "𐪙",
["ẖ"] = "𐪍",
["ṯ"] = "𐪛",
}
data["Narb-tr"] = require "Module:typing-aids/data helpers".split_single_and_multi_char {
["s1"] = "s¹",
["s2"] = "s²",
["s3"] = "s³",
["h_"] = "ẖ",
["d_"] = "ḏ",
["t_"] = "ṯ",
["x"] = "ẖ",
["'"] = "ʾ",
["3"] = "ʿ",
}
return data
m87v3utjvv03x1sf11pacw8d2akbasm
Module talk:typing-aids/data/Orkh
829
125518
193474
2024-11-21T10:41:07Z
Pinthura
2424
Moved page from [[en:Module:typing-aids/data/Orkh]]
193474
wikitext
text/x-wiki
{| class="wikitable"
! oldid || date/time || username || edit summary
|----
| 73398510 || 2023-06-10T16:21:55Z || Yorınçga573 || <nowiki></nowiki>
|----
| 50250310 || 2018-09-02T03:41:22Z || Victar || <nowiki></nowiki>
|----
| 50226388 || 2018-08-29T23:19:14Z || Victar || <nowiki></nowiki>
|----
| 50226338 || 2018-08-29T23:07:51Z || Victar || <nowiki></nowiki>
|----
| 50225739 || 2018-08-29T20:19:15Z || Victar || <nowiki></nowiki>
|----
| 50225706 || 2018-08-29T20:09:03Z || Victar || <nowiki></nowiki>
|----
| 50225692 || 2018-08-29T20:03:10Z || Victar || <nowiki></nowiki>
|----
| 50225476 || 2018-08-29T19:41:53Z || Victar || <nowiki>Created page with "local data = { { ["a"] = "", -- ORKHON A ["i"] = "", -- ORKHON I ["U"] = "", -- ORKHON O ["u"] = "", -- ORKHON OE ["B"] = "", -- ORKHON AB ["b"] =..."</nowiki>
|}
di4txdbbdj7awgurbpjr686pfvna1xt
Module:typing-aids/data/Orkh
828
125519
193475
2024-11-21T10:41:17Z
Pinthura
2424
Moved page from [[en:Module:typing-aids/data/Orkh]] ([[Module talk:typing-aids/data/Orkh|history]])
193475
Scribunto
text/plain
local data = {}
local U = mw.ustring.char
local dot_above = U(0x307) -- dot above
local caron = U(0x30C) -- caron
data = {
[1] = {
["n" .. dot_above] = "y",
["ɲ"] = "F",
["c" .. caron] = "C",
["s" .. caron] = "Y",
},
[2] = {
["a"] = "𐰀", -- ORKHON A
["i"] = "𐰃", -- ORKHON I
["U"] = "𐰆", -- ORKHON O
["u"] = "𐰇", -- ORKHON OE
["B"] = "𐰉", -- ORKHON AB
["b"] = "𐰋", -- ORKHON AEB
["G"] = "𐰍", -- ORKHON AG
["g"] = "𐰏", -- ORKHON AEG
["D"] = "𐰑", -- ORKHON AD
["d"] = "𐰓", -- ORKHON AED
["z"] = "𐰔", -- ORKHON EZ
["J"] = "𐰖", -- ORKHON AY
["j"] = "𐰘", -- ORKHON AEY
["K"] = "𐰚", -- ORKHON AEK
["q"] = "𐰜", -- ORKHON OEK
["L"] = "𐰞", -- ORKHON AL
["l"] = "𐰠", -- ORKHON AEL
["w"] = "𐰡", -- ORKHON ELT
["m"] = "𐰢", -- ORKHON EM
["N"] = "𐰣", -- ORKHON AN
["n"] = "𐰤", -- ORKHON AEN
["O"] = "𐰦", -- ORKHON ENT
["W"] = "𐰨", -- ORKHON ENC
["F"] = "𐰪", -- ORKHON ENY
["y"] = "𐰭", -- ORKHON ENG
["p"] = "𐰯", -- ORKHON EP
["X"] = "𐰰", -- ORKHON OP
-- [""] = "𐰱", -- ORKHON IC
["C"] = "𐰲", -- ORKHON EC
["Q"] = "𐰴", -- ORKHON AQ
["k"] = "𐰶", -- ORKHON IQ
["x"] = "𐰸", -- ORKHON OQ
["R"] = "𐰺", -- ORKHON AR
["r"] = "𐰼", -- ORKHON AER
["S"] = "𐰽", -- ORKHON AS
["s"] = "𐰾", -- ORKHON AES
["c"] = "𐰿", -- ORKHON ASH
["Y"] = "𐱁", -- ORKHON ESH
["T"] = "𐱃", -- ORKHON AT
["t"] = "𐱅", -- ORKHON AET
-- [""] = "𐱇", -- ORKHON OT
["V"] = "𐱈", -- ORKHON BASH
},
}
return data
sqh85hrmyq7e32091yll2ws1ztxenwa
Module talk:typing-aids/data/Palm
829
125520
193476
2024-11-21T10:41:27Z
Pinthura
2424
Moved page from [[en:Module:typing-aids/data/Palm]]
193476
wikitext
text/x-wiki
{| class="wikitable"
! oldid || date/time || username || edit summary
|----
| 52387394 || 2019-04-19T19:08:54Z || Victar || <nowiki></nowiki>
|----
| 52387355 || 2019-04-19T19:00:23Z || Victar || <nowiki></nowiki>
|----
| 52387242 || 2019-04-19T18:19:40Z || Victar || <nowiki></nowiki>
|----
| 52387202 || 2019-04-19T18:08:33Z || Victar || <nowiki>Created page with "local data = {} local U = mw.ustring.char local circumflex = U(0x302) -- circumflex local macron_below = U(0x331) -- macron below data = { [1] = { ["ʾ"] = "𐡠", -- a..."</nowiki>
|}
4rvzlhlfabmkplzrafs6iquibks9hs0
Module:typing-aids/data/Palm
828
125521
193479
2024-11-21T10:41:37Z
Pinthura
2424
Moved page from [[en:Module:typing-aids/data/Palm]] ([[Module talk:typing-aids/data/Palm|history]])
193479
Scribunto
text/plain
local U = mw.ustring.char
local circumflex = U(0x302) -- circumflex
local macron_below = U(0x331) -- macron below
local data = {
{
["ʾ"] = "𐡠", -- aleph
["b"] = "𐡡", -- beth
["g"] = "𐡢", -- gimel
["d"] = "𐡣", -- daleth
["ẖ"] = "𐡤", ["h .. macron_below"] = "𐡤", -- he
["w"] = "𐡥", -- waw
["z"] = "𐡦", -- zayin
["h"] = "𐡧", -- heth
["ṯ"] = "𐡨", ["t .. macron_below"] = "𐡨", -- teth
["y"] = "𐡩", -- yodh
["k"] = "𐡪", -- kaph
["l"] = "𐡫", -- lamedh
["m"] = "𐡬", -- mem
["n"] = "𐡮", -- nun
["s"] = "𐡯", -- samekh
["ʿ"] = "𐡰", -- ayin
["p"] = "𐡱", -- pe
["c"] = "𐡲", -- sadhe
["q"] = "𐡳", -- qoph
["r"] = "𐡴", -- resh
["š"] = "𐡵", ["s .. circumflex"] = "𐡵", -- shin
["t"] = "𐡶", -- taw
["☙"] = "𐡷", -- left-pointing fleuron
["❧"] = "𐡸", -- right-pointing fleuron
["1"] = "𐡹", -- one
["2"] = "𐡺", -- two
["3"] = "𐡻", -- three
["4"] = "𐡼", -- four
["5"] = "𐡽", -- five
["10"] = "𐡾", -- ten
["20"] = "𐡿", -- twenty
},
{
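-- Lua's %f frontier pattern: a nun 𐡮 followed by whitespace, punctuation, or the
-- end of the input is rewritten here to its final form 𐡭.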
["𐡮%f[%s%p%z]"] = "𐡭",
},
{
["𐡭%*"] = "𐡮", -- used to block conversion to final nun
["𐡭%-"] = "𐡮-", -- used to block conversion to final nun
}
}
return data
ckhy0nxprnl995rmvca3cz17417ptqy
Module talk:typing-aids/data/Phli
829
125522
193480
2024-11-21T10:41:47Z
Pinthura
2424
Moved page from [[en:Module:typing-aids/data/Phli]]
193480
wikitext
text/x-wiki
{| class="wikitable"
! oldid || date/time || username || edit summary
|----
| 62348381 || 2021-04-11T03:29:14Z || Victar || <nowiki></nowiki>
|----
| 62328102 || 2021-04-08T19:26:18Z || Victar || <nowiki></nowiki>
|----
| 61816241 || 2021-02-14T01:14:18Z || Erutuon || <nowiki>move pal-tr to [[Module:typing-aids/data/pal]]</nowiki>
|----
| 61816219 || 2021-02-14T01:07:49Z || Victar || <nowiki></nowiki>
|----
| 61816215 || 2021-02-14T01:05:50Z || Victar || <nowiki></nowiki>
|----
| 61816200 || 2021-02-14T01:00:41Z || Victar || <nowiki></nowiki>
|----
| 61816199 || 2021-02-14T01:00:08Z || Victar || <nowiki></nowiki>
|----
| 61816055 || 2021-02-13T23:52:28Z || Victar || <nowiki></nowiki>
|----
| 61816051 || 2021-02-13T23:50:28Z || Victar || <nowiki>Created page with "local data = {} local U = mw.ustring.char local dot_below = U(0x323) -- dot below local caron = U(0x30C) -- caron data = { [1] = { ["ʾ"] = "𐭠", -- aleph ["b"] = "..."</nowiki>
|}
l2oiu6ym6r1dd5wmze7q1ipbwhy6fvk
Module:typing-aids/data/Phli
828
125523
193481
2024-11-21T10:41:57Z
Pinthura
2424
Moved page from [[en:Module:typing-aids/data/Phli]] ([[Module talk:typing-aids/data/Phli|history]])
193481
Scribunto
text/plain
local U = mw.ustring.char
local dot_below = U(0x323) -- dot below
local caron = U(0x30C) -- caron
local scaron = U(0x161) -- latin small letter s with caron
local data = {
[1] = {
["h" .. dot_below] = "𐭧", -- heth
["t" .. dot_below] = "𐭨", -- teth
["s" .. caron] = "𐭱", -- shin
},
[2] = {
["ʾ"] = "𐭠", -- aleph
["b"] = "𐭡", -- beth
["g"] = "𐭢", -- gimel
["d"] = "𐭣", -- daleth
["h"] = "𐭤", -- he
["w"] = "𐭥", -- waw-ayin-resh
["ʿ"] = "𐭥", -- waw-ayin-resh
["r"] = "𐭥", -- waw-ayin-resh
["z"] = "𐭦", -- zayin
["y"] = "𐭩", -- yodh
["k"] = "𐭪", -- kaph
["l"] = "𐭫", -- lamedh
["m"] = "𐭬", -- mem-qoph
["q"] = "𐭬", -- mem-qoph
["n"] = "𐭭", -- nun
["s"] = "𐭮", -- samekh
["p"] = "𐭯", -- pe
["c"] = "𐭰", -- sadhe
[scaron] = "𐭱", -- shin
["t"] = "𐭲", -- taw
},
}
data["Phli-tr"] = {
{
}
}
return data
lkdsdneft3nwtv5j166005fht6cga4w
Module talk:typing-aids/data/Phlv
829
125524
193482
2024-11-21T10:42:07Z
Pinthura
2424
Moved page from [[en:Module:typing-aids/data/Phlv]]
193482
wikitext
text/x-wiki
{| class="wikitable"
! oldid || date/time || username || edit summary
|----
| 61816267 || 2021-02-14T01:21:20Z || Erutuon || <nowiki>move shortcut-or-ASCII-to-transliteration stuff to [[Module:typing-aids/data/pal]]</nowiki>
|----
| 50229810 || 2018-08-30T02:12:58Z || Victar || <nowiki></nowiki>
|----
| 50229724 || 2018-08-30T02:11:24Z || Victar || <nowiki></nowiki>
|----
| 49189029 || 2018-03-20T16:59:36Z || Victar || <nowiki></nowiki>
|----
| 49186819 || 2018-03-20T03:06:53Z || Victar || <nowiki></nowiki>
|----
| 49186646 || 2018-03-20T01:54:35Z || Victar || <nowiki></nowiki>
|----
| 49186629 || 2018-03-20T01:39:56Z || Victar || <nowiki></nowiki>
|----
| 49186614 || 2018-03-20T01:24:49Z || Victar || <nowiki></nowiki>
|----
| 49186595 || 2018-03-20T01:14:56Z || Victar || <nowiki></nowiki>
|----
| 49186575 || 2018-03-20T01:02:40Z || Victar || <nowiki></nowiki>
|----
| 49185436 || 2018-03-19T18:46:14Z || Victar || <nowiki></nowiki>
|----
| 49182324 || 2018-03-19T03:50:50Z || Victar || <nowiki></nowiki>
|----
| 49182307 || 2018-03-19T03:47:46Z || Victar || <nowiki>Created page with "local data = { { ["ʾ"] = "", -- aleph-het ["’"] = "", -- aleph-het ["â"] = "", -- aleph-het ["ā"] = "", -- aleph-het ["ḥ"] = "", -- aleph..."</nowiki>
|}
q43iz4vapcrw2c0k6ek2n5atvr9b14q
Module:typing-aids/data/Phlv
828
125525
193483
2024-11-21T10:42:17Z
Pinthura
2424
Moved page from [[en:Module:typing-aids/data/Phlv]] ([[Module talk:typing-aids/data/Phlv|history]])
193483
Scribunto
text/plain
local data = {}
local U = mw.ustring.char
local dot_below = U(0x323) -- dot below
local caron = U(0x30C) -- caron
local circumflex = U(0x302) -- circumflex
local macron = U(0x304) -- macron
data = {
[1] = {
["ʾ"] = "", -- aleph-het
["h" .. dot_below] = "", -- aleph-het
["b"] = "", -- beth
["g"] = "", -- gimel-daleth-yodh with two dots above
["d"] = "", -- gimel-daleth-yodh with hat above
["y"] = "", -- gimel-daleth-yodh with two dots below
["j"] = "", -- gimel-daleth-yodh with dot below
["d" .. dot_below] = "", -- old daleth
["h"] = "", -- he
["w"] = "", -- waw-nun-ayin-resh
["n"] = "", -- waw-nun-ayin-resh
["'"] = "", -- waw-nun-ayin-resh
["r"] = "", -- waw-nun-ayin-resh
["z"] = "", -- zayin
["k"] = "", -- kaph
["γ"] = "", -- old kaph
["l"] = "", -- lamedh
["ƚ"] = "", -- old lamedh
["l" .. dot_below] = "", -- l-lamedh
["m"] = "", -- mem-qoph
["s"] = "", -- samekh
["p"] = "", -- pe
["c"] = "", -- sadhe
["s" .. caron] = "", -- shin
["t"] = "", -- taw
["x"] = "", -- x1
["x"] = "", -- x2
},
}
return data
948z2uqp9mvm9mb411r8bfkfm5sphxg
Module talk:typing-aids/data/Phnx
829
125526
193484
2024-11-21T10:42:27Z
Pinthura
2424
Moved page from [[en:Module:typing-aids/data/Phnx]]
193484
wikitext
text/x-wiki
{| class="wikitable"
! oldid || date/time || username || edit summary
|----
| 61818503 || 2021-02-14T10:15:55Z || Victar || <nowiki>Created page with "local U = mw.ustring.char local dot_below = U(0x323) -- dot below local caron = U(0x30C) -- caron local scaron = U(0x161) -- latin small letter s with caron local data = {..."</nowiki>
|}
bjzzwob7hp22kri7a39nneevywy67fp
Module:typing-aids/data/Phnx
828
125527
193485
2024-11-21T10:42:37Z
Pinthura
2424
Moved page from [[en:Module:typing-aids/data/Phnx]] ([[Module talk:typing-aids/data/Phnx|history]])
193485
Scribunto
text/plain
local U = mw.ustring.char
local dot_below = U(0x323) -- dot below
local caron = U(0x30C) -- caron
local scaron = U(0x161) -- latin small letter s with caron
local data = {
[1] = {
["h" .. dot_below] = "𐤇", -- het
["t" .. dot_below] = "𐤈", -- tet
["s" .. dot_below] = "𐤑", -- sade
["s" .. caron] = "𐤔", -- shin
},
[2] = {
["ʾ"] = "𐤀", -- alf
["b"] = "𐤁", -- bet
["g"] = "𐤂", -- gaml
["d"] = "𐤃", -- dalt
["h"] = "𐤄", -- he
["w"] = "𐤅", -- wau
["z"] = "𐤆", -- zai
["y"] = "𐤉", -- yod
["k"] = "𐤊", -- kaf
["l"] = "𐤋", -- lamd
["m"] = "𐤌", -- mem
["n"] = "𐤍", -- nun
["s"] = "𐤎", -- samk
["ʿ"] = "𐤏", -- ain
["p"] = "𐤐", -- pe
["q"] = "𐤒", -- qof
["r"] = "𐤓", -- rosh
[scaron] = "𐤔", -- shin
["t"] = "𐤕", -- tau
},
}
return data
9tprcvbtism14rphab6eyeadndis4aa
Module:typing-aids/data/ar
828
125528
193486
2024-04-05T04:55:15Z
en>Theknightwho
0
Use faster implementation of mw.ustring.char.
193486
Scribunto
text/plain
U = require("Module:string/char")
local fatHa = U(0x64E)
local fatHatan = U(0x64B)
local kasratan = U(0x64D)
local Dammatan = U(0x64C)
local kasra = U(0x650)
local Damma = U(0x64F)
local superscript_alif = U(0x670)
local sukuun = U(0x652)
local shadda = U(0x651)
local vowel_diacritics = fatHa .. kasra .. Damma .. fatHatan .. kasratan .. Dammatan
local short_vowel = "[" .. fatHa .. kasra .. Damma .. "]"
local taTwiil = U(0x640)
local alif = "ا"
local waaw = "و"
local yaa = "ي"
local alif_maqSuura = "ى"
local laam = "ل"
local madda = "آ"
local waSla = "ٱ"
local hamza = "ء"
local alif_hamza = "أ"
local alif_hamza_below = "إ"
local yaa_hamza = "ئ"
local waaw_hamza = "ؤ"
local taa_marbuuTa = "ة"
local article = alif .. laam
local consonants = "بتثجحخدذرزسشصضطظعغقفلكمنءة"
local consonant = "[" .. consonants .. "]"
local sun_letters = "تثدذرزسشصضطظلن"
local sun_letter = "[" .. sun_letters .. "]"
-- Mostly [[w:Bikdash Arabic Transliteration Rules]], some [[w:Arabic chat alphabet]]
replacements = {
[1] = {
["eaa"] = madda,
["aaa"] = fatHa .. alif_maqSuura,
["_a"] = superscript_alif,
["t\'"] = taa_marbuuTa,
["z\'"] = "ذ",
["d\'"] = "ض",
["6\'"] = "ظ",
["3\'"] = "ع",
["5\'"] = "خ",
["al%-"] = article,
[","] = "،",
[";"] = "؛",
["?"] = "؟",
},
[2] = {
["aa"] = fatHa .. alif,
["ii"] = kasra .. yaa,
["uu"] = Damma .. waaw,
["aN"] = fatHatan,
["iN"] = kasratan,
["uN"] = Dammatan,
["A"] = alif,
["W"] = waSla,
["b"] = "ب",
["c"] = "ث",
["d"] = "د",
["e"] = hamza,
["2"] = hamza,
["'"] = hamza,
["E"] = "ع",
["3"] = "ع",
["`"] = "ع",
["f"] = "ف",
["D"] = "ض",
["g"] = "غ",
["h"] = "ه",
["H"] = "ح",
["7"] = "ح",
["j"] = "ج",
["k"] = "ك",
["K"] = "خ",
["l"] = "ل",
["L"] = "ﷲ", -- Allah ligature
["m"] = "م",
["n"] = "ن",
["p"] = "پ",
["q"] = "ق",
["r"] = "ر",
["s"] = "س",
["S"] = "ص",
["9"] = "ص",
["t"] = "ت",
["T"] = "ط",
["6"] = "ط",
["v"] = "ڤ",
["w"] = waaw,
["y"] = yaa,
["x"] = "ش",
["z"] = "ز",
["Z"] = "ظ",
["^%-"] = taTwiil,
["%s%-"] = taTwiil,
["%-$"] = taTwiil,
["%-%s"] = taTwiil,
},
[3] = {
["a"] = fatHa,
["i"] = kasra,
["u"] = Damma,
["^i"] = alif .. kasra,
["^u"] = alif .. Damma,
["([^" .. hamza .. taa_marbuuTa .. "])" .. fatHatan] = "%1" .. fatHatan .. alif,
["^(" .. article .. sun_letter .. ")"] = "%1" .. shadda,
["(%s" .. article .. sun_letter .. ")"] = "%1" .. shadda,
},
[4] = {
["(" .. consonant .. ")([^" .. vowel_diacritics .. "%s])"] = "%1" .. sukuun .. "%2",
["(" .. fatHa .. "[" .. waaw .. yaa .. "])" .. "([^" .. vowel_diacritics .. "])"] = "%1" .. sukuun .. "%2",
["^" .. hamza] = { alif_hamza, after = "[" .. fatHa .. Damma .. "]" },
["^" .. hamza .. kasra] = alif_hamza_below .. kasra,
["(%s)" .. hamza .. kasra] = "%1" .. alif_hamza_below .. kasra,
},
[5] = {
-- remove sukuun from definite article before sun letter
["^" .. article .. sukuun .. "(" .. sun_letter .. ")"] = article .. "%1",
["(%s)" .. article .. sukuun .. "(" .. sun_letter .. ")"] = "%1" .. article .. "%2",
-- to add final sukuun
["0"] = "",
["(" .. consonant .. ")" .. sukuun .. "%1"] = "%1" .. shadda,
["([" .. waaw .. yaa .. "])%1"] = "%1" .. shadda,
[kasra .. hamza] = kasra .. yaa_hamza,
[hamza .. kasra] = yaa_hamza .. kasra,
[Damma .. hamza] = {
Damma .. waaw_hamza,
after = "[^" .. kasra .. "]",
},
[fatHa .. hamza] = fatHa .. alif_hamza,
["([^" .. waaw .. alif .. "])" .. hamza .. fatHa] = "%1" .. alif_hamza .. fatHa,
},
[6] = {
["(" .. short_vowel .. "%s)" .. article] = "%1" .. waSla .. laam,
["(" .. fatHa .. alif .. "%s)" .. article] = "%1" .. waSla .. laam,
["(" .. kasra .. yaa .. "%s)" .. article] = "%1" .. waSla .. laam,
["(" .. Damma .. waaw .. "%s)" .. article] = "%1" .. waSla .. laam,
["(" .. fatHa .. alif_maqSuura .. "%s)" .. article] = "%1" .. waSla .. laam,
},
}
-- [""] = "",
return replacements
mcyhg9klcurun7n0qb3ytmlgp6qag52
193487
193486
2024-11-21T10:42:41Z
Lee
19
One revision from [[:en:Module:typing-aids/data/ar]]
193486
Scribunto
text/plain
U = require("Module:string/char")
local fatHa = U(0x64E)
local fatHatan = U(0x64B)
local kasratan = U(0x64D)
local Dammatan = U(0x64C)
local kasra = U(0x650)
local Damma = U(0x64F)
local superscript_alif = U(0x670)
local sukuun = U(0x652)
local shadda = U(0x651)
local vowel_diacritics = fatHa .. kasra .. Damma .. fatHatan .. kasratan .. Dammatan
local short_vowel = "[" .. fatHa .. kasra .. Damma .. "]"
local taTwiil = U(0x640)
local alif = "ا"
local waaw = "و"
local yaa = "ي"
local alif_maqSuura = "ى"
local laam = "ل"
local madda = "آ"
local waSla = "ٱ"
local hamza = "ء"
local alif_hamza = "أ"
local alif_hamza_below = "إ"
local yaa_hamza = "ئ"
local waaw_hamza = "ؤ"
local taa_marbuuTa = "ة"
local article = alif .. laam
local consonants = "بتثجحخدذرزسشصضطظعغقفلكمنءة"
local consonant = "[" .. consonants .. "]"
local sun_letters = "تثدذرزسشصضطظلن"
local sun_letter = "[" .. sun_letters .. "]"
-- Mostly [[w:Bikdash Arabic Transliteration Rules]], some [[w:Arabic chat alphabet]]
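-- A rough worked example (assuming the numbered tables below are applied in
-- order, as [[Module:typing-aids]] is expected to do): typing "baab" passes
-- through table 2 as b → ب, aa → fatḥa + alif, b → ب, yielding بَاب.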
replacements = {
[1] = {
["eaa"] = madda,
["aaa"] = fatHa .. alif_maqSuura,
["_a"] = superscript_alif,
["t\'"] = taa_marbuuTa,
["z\'"] = "ذ",
["d\'"] = "ض",
["6\'"] = "ظ",
["3\'"] = "ع",
["5\'"] = "خ",
["al%-"] = article,
[","] = "،",
[";"] = "؛",
["?"] = "؟",
},
[2] = {
["aa"] = fatHa .. alif,
["ii"] = kasra .. yaa,
["uu"] = Damma .. waaw,
["aN"] = fatHatan,
["iN"] = kasratan,
["uN"] = Dammatan,
["A"] = alif,
["W"] = waSla,
["b"] = "ب",
["c"] = "ث",
["d"] = "د",
["e"] = hamza,
["2"] = hamza,
["'"] = hamza,
["E"] = "ع",
["3"] = "ع",
["`"] = "ع",
["f"] = "ف",
["D"] = "ض",
["g"] = "غ",
["h"] = "ه",
["H"] = "ح",
["7"] = "ح",
["j"] = "ج",
["k"] = "ك",
["K"] = "خ",
["l"] = "ل",
["L"] = "ﷲ", -- Allah ligature
["m"] = "م",
["n"] = "ن",
["p"] = "پ",
["q"] = "ق",
["r"] = "ر",
["s"] = "س",
["S"] = "ص",
["9"] = "ص",
["t"] = "ت",
["T"] = "ط",
["6"] = "ط",
["v"] = "ڤ",
["w"] = waaw,
["y"] = yaa,
["x"] = "ش",
["z"] = "ز",
["Z"] = "ظ",
["^%-"] = taTwiil,
["%s%-"] = taTwiil,
["%-$"] = taTwiil,
["%-%s"] = taTwiil,
},
[3] = {
["a"] = fatHa,
["i"] = kasra,
["u"] = Damma,
["^i"] = alif .. kasra,
["^u"] = alif .. Damma,
["([^" .. hamza .. taa_marbuuTa .. "])" .. fatHatan] = "%1" .. fatHatan .. alif,
["^(" .. article .. sun_letter .. ")"] = "%1" .. shadda,
["(%s" .. article .. sun_letter .. ")"] = "%1" .. shadda,
},
[4] = {
["(" .. consonant .. ")([^" .. vowel_diacritics .. "%s])"] = "%1" .. sukuun .. "%2",
["(" .. fatHa .. "[" .. waaw .. yaa .. "])" .. "([^" .. vowel_diacritics .. "])"] = "%1" .. sukuun .. "%2",
["^" .. hamza] = { alif_hamza, after = "[" .. fatHa .. Damma .. "]" },
["^" .. hamza .. kasra] = alif_hamza_below .. kasra,
["(%s)" .. hamza .. kasra] = "%1" .. alif_hamza_below .. kasra,
},
[5] = {
-- remove sukuun from definite article before sun letter
["^" .. article .. sukuun .. "(" .. sun_letter .. ")"] = article .. "%1",
["(%s)" .. article .. sukuun .. "(" .. sun_letter .. ")"] = "%1" .. article .. "%2",
-- to add final sukuun
["0"] = "",
["(" .. consonant .. ")" .. sukuun .. "%1"] = "%1" .. shadda,
["([" .. waaw .. yaa .. "])%1"] = "%1" .. shadda,
[kasra .. hamza] = kasra .. yaa_hamza,
[hamza .. kasra] = yaa_hamza .. kasra,
[Damma .. hamza] = {
Damma .. waaw_hamza,
after = "[^" .. kasra .. "]",
},
[fatHa .. hamza] = fatHa .. alif_hamza,
["([^" .. waaw .. alif .. "])" .. hamza .. fatHa] = "%1" .. alif_hamza .. fatHa,
},
[6] = {
["(" .. short_vowel .. "%s)" .. article] = "%1" .. waSla .. laam,
["(" .. fatHa .. alif .. "%s)" .. article] = "%1" .. waSla .. laam,
["(" .. kasra .. yaa .. "%s)" .. article] = "%1" .. waSla .. laam,
["(" .. Damma .. waaw .. "%s)" .. article] = "%1" .. waSla .. laam,
["(" .. fatHa .. alif_maqSuura .. "%s)" .. article] = "%1" .. waSla .. laam,
},
}
-- [""] = "",
return replacements
mcyhg9klcurun7n0qb3ytmlgp6qag52
Module talk:typing-aids/data/Sarb
829
125529
193488
2024-11-21T10:42:47Z
Pinthura
2424
Moved page from [[en:Module:typing-aids/data/Sarb]]
193488
wikitext
text/x-wiki
{| class="wikitable"
! oldid || date/time || username || edit summary
|----
| 62148054 || 2021-03-17T23:21:50Z || Erutuon || <nowiki>no ASCII shortcuts in Sarb table</nowiki>
|----
| 62147956 || 2021-03-17T23:00:39Z || Erutuon || <nowiki>use a function to fix the ordering of replacements</nowiki>
|----
| 62142820 || 2021-03-17T01:25:09Z || Metaknowledge || <nowiki>Created page with "local U = mw.ustring.char local data = {} data["Sarb"] = { [1] = { ["s3"] = "𐩯", ["d_"] = "𐩹", ["h_"] = "𐪍", ["s1"] = "𐩪", ["s2"] = "𐩦", ["t_"..."</nowiki>
|}
tqxokf4wpjrb9fkrppzuhsox6orpdtk
Module:typing-aids/data/Sarb
828
125530
193489
2024-11-21T10:42:57Z
Pinthura
2424
Moved page from [[en:Module:typing-aids/data/Sarb]] ([[Module talk:typing-aids/data/Sarb|history]])
193489
Scribunto
text/plain
local U = mw.ustring.char
local data = {}
data["Sarb"] = require "Module:typing-aids/data helpers".split_single_and_multi_char {
["s³"] = "𐩯",
["ḏ"] = "𐩹",
["ḫ"] = "𐪍",
["s¹"] = "𐩪",
["s²"] = "𐩦",
["ṯ"] = "𐩻",
["ʾ"] = "𐩱",
["ʿ"] = "𐩲",
["b"] = "𐩨",
["d"] = "𐩵",
["ḍ"] = "𐩳",
["f"] = "𐩰",
["g"] = "𐩴",
["ġ"] = "𐩶",
["h"] = "𐩠",
["ḥ"] = "𐩢",
["k"] = "𐩫",
["l"] = "𐩡",
["m"] = "𐩣",
["n"] = "𐩬",
["q"] = "𐩤",
["r"] = "𐩧",
["ṣ"] = "𐩮",
["t"] = "𐩩",
["ṭ"] = "𐩷",
["w"] = "𐩥",
["x"] = "𐩭",
["y"] = "𐩺",
["z"] = "𐩸",
["ẓ"] = "𐩼",
["ẖ"] = "𐪍",
["ṯ"] = "𐩻",
}
data["Sarb-tr"] = require "Module:typing-aids/data helpers".split_single_and_multi_char {
["s1"] = "s¹",
["s2"] = "s²",
["s3"] = "s³",
["h_"] = "ḫ",
["d_"] = "ḏ",
["t_"] = "ṯ",
["x"] = "ḫ",
["ẖ"]= "ḫ",
["'"] = "ʾ",
["3"] = "ʿ",
}
return data
cr2vadsrzm1xc9f8sppg8fw081p0ys1
Module:typing-aids/data/got
828
125531
193490
2017-04-23T05:12:28Z
en>Benwing2
0
strip remaining macrons
193490
Scribunto
text/plain
local U = mw.ustring.char
local macron = U(0x304)
local data = {}
data["got"] = {
[1] =
{
["a"] = "𐌰",
["b"] = "𐌱",
["g"] = "𐌲",
["d"] = "𐌳",
["e"] = "𐌴",
["q"] = "𐌵",
["z"] = "𐌶",
["h"] = "𐌷",
["þ"] = "𐌸",
["i"] = "𐌹",
["k"] = "𐌺",
["l"] = "𐌻",
["m"] = "𐌼",
["n"] = "𐌽",
["j"] = "𐌾",
["u"] = "𐌿",
["p"] = "𐍀",
-- [""] = "𐍁",
["r"] = "𐍂",
["s"] = "𐍃",
["t"] = "𐍄",
["w"] = "𐍅",
["f"] = "𐍆",
["x"] = "𐍇",
["ƕ"] = "𐍈",
["o"] = "𐍉",
[macron] = "",
-- [""] = "𐍊",
},
}
data["got-tr"] = {
[1] = {
["e" .. macron] = "e",
["o" .. macron] = "o",
["c"] = "þ",
["v"] = "ƕ",
},
}
return data
hbz5c57r6arxojkbsj8kurkpof1z972
193491
193490
2024-11-21T10:42:58Z
Lee
19
One revision from [[:en:Module:typing-aids/data/got]]
193490
Scribunto
text/plain
local U = mw.ustring.char
local macron = U(0x304)
local data = {}
data["got"] = {
[1] =
{
["a"] = "𐌰",
["b"] = "𐌱",
["g"] = "𐌲",
["d"] = "𐌳",
["e"] = "𐌴",
["q"] = "𐌵",
["z"] = "𐌶",
["h"] = "𐌷",
["þ"] = "𐌸",
["i"] = "𐌹",
["k"] = "𐌺",
["l"] = "𐌻",
["m"] = "𐌼",
["n"] = "𐌽",
["j"] = "𐌾",
["u"] = "𐌿",
["p"] = "𐍀",
-- [""] = "𐍁",
["r"] = "𐍂",
["s"] = "𐍃",
["t"] = "𐍄",
["w"] = "𐍅",
["f"] = "𐍆",
["x"] = "𐍇",
["ƕ"] = "𐍈",
["o"] = "𐍉",
[macron] = "",
-- [""] = "𐍊",
},
}
data["got-tr"] = {
[1] = {
["e" .. macron] = "e",
["o" .. macron] = "o",
["c"] = "þ",
["v"] = "ƕ",
},
}
return data
hbz5c57r6arxojkbsj8kurkpof1z972
Module talk:typing-aids/data/Sogd
829
125532
193492
2024-11-21T10:43:07Z
Pinthura
2424
Moved page from [[en:Module:typing-aids/data/Sogd]]
193492
wikitext
text/x-wiki
{| class="wikitable"
! oldid || date/time || username || edit summary
|----
| 74734061 || 2023-06-26T20:09:10Z || Victar || <nowiki></nowiki>
|----
| 62198951 || 2021-03-20T09:31:57Z || Victar || <nowiki></nowiki>
|----
| 53470940 || 2019-06-28T03:53:49Z || Victar || <nowiki></nowiki>
|----
| 53470932 || 2019-06-28T03:52:45Z || Victar || <nowiki></nowiki>
|----
| 52364935 || 2019-04-16T00:25:51Z || Victar || <nowiki></nowiki>
|----
| 52364933 || 2019-04-16T00:25:35Z || Victar || <nowiki></nowiki>
|----
| 50229620 || 2018-08-30T02:07:29Z || Victar || <nowiki></nowiki>
|----
| 49120134 || 2018-03-08T21:39:10Z || Victar || <nowiki>Victar moved page [[Module:typing-aids/data/sog]] to [[Module:typing-aids/data/Sogd]] without leaving a redirect</nowiki>
|----
| 49118757 || 2018-03-08T16:40:35Z || Victar || <nowiki>Victar moved page [[Module:typing-aids/data/Sogd]] to [[Module:typing-aids/data/sog]] without leaving a redirect</nowiki>
|----
| 49118746 || 2018-03-08T16:38:15Z || Victar || <nowiki></nowiki>
|----
| 49118158 || 2018-03-08T15:11:00Z || Victar || <nowiki></nowiki>
|----
| 49117729 || 2018-03-08T13:53:39Z || Victar || <nowiki></nowiki>
|----
| 49117714 || 2018-03-08T13:47:37Z || Victar || <nowiki></nowiki>
|----
| 49116556 || 2018-03-08T06:07:27Z || Victar || <nowiki>Created page with "local U = mw.ustring.char local data = { { ["ʾ"] = "𐼰", -- aleph ["’"] = "𐼰", -- aleph ["â"] = "𐼰", -- aleph ["ā"] = "𐼰", -- aleph ["A"] = "𐼰",..."</nowiki>
|}
huswe3ls0g49y8n32d2ki45qc9s3jhi
Module:typing-aids/data/Sogd
828
125533
193493
2024-11-21T10:43:17Z
Pinthura
2424
Moved page from [[en:Module:typing-aids/data/Sogd]] ([[Module talk:typing-aids/data/Sogd|history]])
193493
Scribunto
text/plain
local data = {}
local U = mw.ustring.char
local dot_below = U(0x323) -- dot below
local caron = U(0x30C) -- caron
local circumflex = U(0x302) -- circumflex
local macron = U(0x304) -- macron
local scaron = U(0x161) -- latin small letter s with caron
data = {
[1] = {
["c" .. caron] = "𐼿", -- sadhe
["g" .. caron] = "𐼲", -- gimel
["s" .. caron] = "𐽁", -- shin
},
[2] = {
["ʾ"] = "𐼰", -- aleph
["β"] = "𐼱", -- beth
["ɣ"] = "𐼲", -- gimel
["h"] = "𐼳", -- he
["w"] = "𐼴", -- waw
["z"] = "𐼵", -- zayin
["x"] = "𐼶", -- heth
["y"] = "𐼷", -- yodh
["k"] = "𐼸", -- kaph
["δ"] = "𐼹", -- lamedh
["m"] = "𐼺", -- mem
["n"] = "𐼻", -- nun
["s"] = "𐼼", -- samekh
["ʿ"] = "𐫙", -- ayin
["p"] = "𐼾", -- pe
["c"] = "𐼿", -- sadhe
["r"] = "𐽀", -- resh-ayin
[scaron] = "𐽁", -- shin
["t"] = "𐽂", -- taw
["f"] = "𐽃", -- feth
["l"] = "𐽄", -- lesh
["100"] = "𐽔", -- one hundred
["10"] = "𐽒", -- ten
["1"] = "𐽑", -- one
["20"] = "𐽓", -- twenty
},
}
data["Sogd-tr"] = {
{
["’"] = "ʾ", ["a" .. circumflex] = "ʾ", ["a" .. macron] = "ʾ", ["A"] = "ʾ",
["B"] = "b",
["G"] = "ɣ",
["H" .. dot_below] = "x",
["L"] = "δ",
["e" .. circumflex] = "ʿ", ["e" .. macron] = "ʿ", ["E"] = "ʿ",
["S"] = "s" .. caron,
},
}
return data
soswlqs58nu0xiz6zhk7ae395dmva6k
Module talk:typing-aids/data/Sogo
829
125534
193494
2024-11-21T10:43:27Z
Pinthura
2424
Moved page from [[en:Module:typing-aids/data/Sogo]]
193494
wikitext
text/x-wiki
{| class="wikitable"
! oldid || date/time || username || edit summary
|----
| 74733021 || 2023-06-26T19:55:04Z || Victar || <nowiki></nowiki>
|----
| 74732945 || 2023-06-26T19:54:03Z || Victar || <nowiki></nowiki>
|----
| 62107678 || 2021-03-14T22:48:31Z || Victar || <nowiki></nowiki>
|----
| 57948761 || 2019-11-05T08:19:22Z || Victar || <nowiki></nowiki>
|----
| 57948689 || 2019-11-05T07:46:43Z || Victar || <nowiki></nowiki>
|----
| 57948666 || 2019-11-05T07:41:19Z || Victar || <nowiki></nowiki>
|----
| 57948663 || 2019-11-05T07:40:23Z || Victar || <nowiki>Created page with "local data = {} local U = mw.ustring.char local caron = U(0x30C) -- caron local circumflex = U(0x302) -- circumflex local macron = U(0x304) -- macron local scaron = U(0x16..."</nowiki>
|}
a919762aov2h87vy26b3wchtit1mr8d
Module:typing-aids/data/Sogo
828
125535
193495
2024-11-21T10:43:37Z
Pinthura
2424
Moved page from [[en:Module:typing-aids/data/Sogo]] ([[Module talk:typing-aids/data/Sogo|history]])
193495
Scribunto
text/plain
local data = {}
local U = mw.ustring.char
local dot_below = U(0x323) -- dot below
local caron = U(0x30C) -- caron
local circumflex = U(0x302) -- circumflex
local macron = U(0x304) -- macron
local gcaron = U(0x1E7) -- latin small letter g with caron
local scaron = U(0x161) -- latin small letter s with caron
data = {
[1] = {
["g" .. caron] = "𐼄", -- gimel
["s" .. caron] = "𐫢", -- shin
},
[2] = {
["ʾ"] = "𐼀", -- aleph
["β"] = "𐼂", -- beth
["ɣ"] = "𐼄", -- gimel
["h"] = "𐼅", -- he
["w"] = "𐼇", -- waw
["z"] = "𐼈", -- zayin
["x"] = "𐼉", -- heth
["y"] = "𐼊", -- yodh
["k"] = "𐼋", -- kaph
["δ"] = "𐼌", -- lamedh
["m"] = "𐼍", -- mem
["n"] = "𐼎", -- nun
["s"] = "𐼑", -- samekh
["ʿ"] = "𐼒", -- ayin
["p"] = "𐼔", -- pe
["c"] = "𐼕", -- sadhe
["r"] = "𐼘", -- resh-ayin-daleth
[scaron] = "𐼙", -- shin
["t"] = "𐼚", -- taw
["1/2"] = "𐼦", -- 1/2
["100"] = "𐼥", -- one hundred
["10"] = "𐼢", -- ten
["1"] = "𐼝", -- one
["20"] = "𐼣", -- twenty
["2"] = "𐼞", -- two
["30"] = "𐼤", -- thirty
["3"] = "𐼟", -- three
["4"] = "𐼠", -- four
["5"] = "𐼡", -- five
},
[3] = {
["𐼀%f[%s%p%z]"] = "𐼁", -- final aleph
["𐼂%f[%s%p%z]"] = "𐼃", -- final beth
["𐼅%f[%s%p%z]"] = "𐼆", -- final he
["𐼎%f[%s%p%z]"] = "𐼏", -- final nun
["𐼕%f[%s%p%z]"] = "𐼖", -- final sadhe
["𐼚%f[%s%p%z]"] = "𐼛", -- final taw
},
}
data["Sogo-tr"] = {
{
["’"] = "ʾ", ["a" .. circumflex] = "ʾ", ["a" .. macron] = "ʾ", ["a"] = "ʾ",
["‘"] = "ʿ", ["e" .. circumflex] = "ʿ", ["e" .. macron] = "ʿ", ["e"] = "ʿ",
["b"] = "β", ["B"] = "β",
["c" .. caron] = "c", ["S" .. caron] = "c",
["d"] = "δ", ["L"] = "δ",
["g" .. caron] = "ɣ", ["G"] = "ɣ",
["H" .. dot_below] = "x",
["S"] = scaron, ["s" .. caron] = scaron,
},
}
return data
cu9viaf3ugmdy62dlis5n4xmhuj6lr5
Module talk:typing-aids/data/Ugar
829
125536
193496
2024-11-21T10:43:47Z
Pinthura
2424
Moved page from [[en:Module:typing-aids/data/Ugar]]
193496
wikitext
text/x-wiki
{| class="wikitable"
! oldid || date/time || username || edit summary
|----
| 61818834 || 2021-02-14T11:36:26Z || Victar || <nowiki></nowiki>
|----
| 61818688 || 2021-02-14T10:57:57Z || Victar || <nowiki>Created page with "local data = {} local U = mw.ustring.char local hook_above = U(0x0309) -- hook above local breve_below = U(0x032E) -- breve below local dot_below = U(0x323) -- dot below l..."</nowiki>
|}
83u0ob0ylao5byf2qgz4ddfuj77wx5o
Module:typing-aids/data/Ugar
828
125537
193497
2024-11-21T10:43:57Z
Pinthura
2424
Moved page from [[en:Module:typing-aids/data/Ugar]] ([[Module talk:typing-aids/data/Ugar|history]])
193497
Scribunto
text/plain
local data = {}
local U = mw.ustring.char
local hook_above = U(0x309) -- hook above
local breve_below = U(0x32E) -- breve below
local dot_below = U(0x323) -- dot below
local caron = U(0x30C) -- caron
local line_below = U(0x331) -- line below
local dot_above = U(0x307) -- dot above
local acute = U(0x301) -- acute
local circumflex_below = U(0x32D) -- circumflex below
local ahook = U(0x1EA3) -- latin small letter a with hook above
local hbreve = U(0x1E2B) -- latin small letter h with breve below
local hdot = U(0x1E25) -- latin small letter h with dot below
local tdot = U(0x1E6D) -- latin small letter t with dot below
local scaron = U(0x161) -- latin small letter s with caron
local dline = U(0x1E0F) -- latin small letter d with line below
local zdot = U(0x1E93) -- latin small letter z with dot below
local sdot = U(0x1E63) -- latin small letter s with dot below
local tline = U(0x1E6F) -- latin small letter t with line below
local gdot = U(0x121) -- latin small letter g with dot above
local ihook = U(0x1EC9) -- latin small letter i with hook above
local uhook = U(0x1EE7) -- latin small letter u with hook above
local sacute = U(0x15B) -- latin small letter s with acute
local kdot = U(0x1E33) -- latin small letter k with dot below
local dcircumflex = U(0x1E13) -- latin small letter d with circumflex below
data = {
[1] = {
["a"] = ahook,
["i"] = ihook,
["u"] = uhook,
["θ"] = tlne,
["ð"] = dline,
["x"] = hbreve, ["ẖ"] = hbreve,
["ɣ"] = gdot, ["ḡ"] = gdot,
["ħ"] = hdo,
["k" .. dot_below] = "q", [kdot] = "q",
["d" .. circumflex_below] = zdot, [dcircumflex] = zdot,
},
[2] = {
["a" .. hook_above] = "𐎀",
["h" .. breve_below] = "𐎃",
["h" .. dot_below] = "𐎈",
["t" .. dot_below] = "𐎉",
["s" .. caron] = "𐎌",
["d" .. line_below] = "𐎏",
["z" .. dot_below] = "𐎑",
["s" .. dot_below] = "𐎕",
["t" .. line_below] = "𐎘",
["g" .. dot_above] = "𐎙",
["ỉ" .. hook_above] = "𐎛",
["u" .. hook_above] = "𐎜",
["s" .. acute] = "𐎝",
},
[3] = {
[ahook] = "𐎀",
["b"] = "𐎁",
["g"] = "𐎂",
[hbreve] = "𐎃",
["d"] = "𐎄",
["h"] = "𐎅",
["w"] = "𐎆",
["z"] = "𐎇",
[hdot] = "𐎈",
[tdot] = "𐎉",
["y"] = "𐎊",
["k"] = "𐎋",
[scaron] = "𐎌",
["l"] = "𐎍",
["m"] = "𐎎",
[dline] = "𐎏",
["n"] = "𐎐",
[zdot] = "𐎑",
["s"] = "𐎒",
["ʿ"] = "𐎓",
["p"] = "𐎔",
[sdot] = "𐎕",
["q"] = "𐎖",
["r"] = "𐎗",
[tline] = "𐎘",
[gdot] = "𐎙",
["t"] = "𐎚",
[ihook] = "𐎛",
[uhook] = "𐎜",
[sacute] = "𐎝",
["·"] = "𐎟", -- word divider
},
}
return data
npm8fks6e3qhy7pp1sjik3iz05y2uql
Module talk:typing-aids/data/ae
829
125538
193498
2024-11-21T10:44:07Z
Pinthura
2424
Moved page from [[en:Module:typing-aids/data/ae]]
193498
wikitext
text/x-wiki
{| class="wikitable"
! oldid || date/time || username || edit summary
|----
| 42726683 || 2017-04-27T14:27:02Z || Benwing2 || <nowiki>fix for a+macron+ring; still problems with a+ring</nowiki>
|----
| 42691519 || 2017-04-24T01:21:55Z || Benwing2 || <nowiki>changes by [[User:Aryamanarora]] should no longer be needed</nowiki>
|----
| 42691488 || 2017-04-24T01:14:21Z || AryamanA || <nowiki></nowiki>
|----
| 42690591 || 2017-04-23T22:27:31Z || AryamanA || <nowiki>i hope i fixed it</nowiki>
|----
| 42588584 || 2017-04-07T08:46:56Z || Mahagaja || <nowiki>*That's* what the problem was? That should just be fixed directly in the page, no need to clutter up the module with this</nowiki>
|----
| 42580148 || 2017-04-06T00:53:26Z || Erutuon || <nowiki>add Cyrillic-to-Latin conversions, just in case</nowiki>
|----
| 42573080 || 2017-04-04T20:25:18Z || Benwing2 || <nowiki>keep hyphens for now until we decide what to do for sure; can always remove them later but not easily add them back</nowiki>
|----
| 42570156 || 2017-04-04T11:48:33Z || Benwing2 || <nowiki></nowiki>
|----
| 42565888 || 2017-04-03T20:09:13Z || Benwing2 || <nowiki></nowiki>
|----
| 42565730 || 2017-04-03T19:48:59Z || Benwing2 || <nowiki>alternative notation for γ</nowiki>
|----
| 42565134 || 2017-04-03T17:56:21Z || Benwing2 || <nowiki>map w to uu</nowiki>
|----
| 42565123 || 2017-04-03T17:52:48Z || Benwing2 || <nowiki></nowiki>
|----
| 42565110 || 2017-04-03T17:48:22Z || Benwing2 || <nowiki>handle ǰ, an alternative notation for what we notate as just j</nowiki>
|----
| 42565054 || 2017-04-03T17:34:16Z || Benwing2 || <nowiki>map "wrong" schwa to right one</nowiki>
|----
| 42564848 || 2017-04-03T16:44:22Z || Benwing2 || <nowiki>avoid ASCII sequences that actually occur</nowiki>
|----
| 42341633 || 2017-02-25T21:55:35Z || Erutuon || <nowiki></nowiki>
|----
| 42341622 || 2017-02-25T21:52:41Z || Erutuon || <nowiki>switch to combining diacritics for easier processing</nowiki>
|----
| 42341353 || 2017-02-25T21:00:01Z || Erutuon || <nowiki>remove double vowel shortcuts</nowiki>
|----
| 42341317 || 2017-02-25T20:53:40Z || Erutuon || <nowiki>alternative shortcuts for diacritics and for other characters</nowiki>
|----
| 42341265 || 2017-02-25T20:46:49Z || Erutuon || <nowiki>moved from [[Module:typing-aids/data]]</nowiki>
|}
k3el7cgecm6gdav2a9efyhx5qkf98vn
Module:typing-aids/data/ae
828
125539
193499
2024-11-21T10:44:17Z
Pinthura
2424
Moved page from [[en:Module:typing-aids/data/ae]] ([[Module talk:typing-aids/data/ae|history]])
193499
Scribunto
text/plain
local data = {}
local U = mw.ustring.char
local macron = U(0x304) -- macron
local dot_above = U(0x307) -- dot above
local ogonek = U(0x328) -- ogonek
local acute = U(0x301) -- acute
local caron = U(0x30C) -- caron
local dot_below = U(0x323) -- dot below
local tilde = U(0x303) -- tilde
local tilde_below = U(0x330) -- tilde below
local ring = U(0x30A) -- ring above
data["ae"] = {
[1] = {
["ə" .. macron] = "𐬇",
["a" .. ogonek .. macron] = "𐬅",
["a" .. macron .. ring] = "𐬃",
["s" .. caron .. acute] = "𐬳",
["s" .. dot_below .. caron] = "𐬴",
["ŋ" .. acute] = "𐬣",
["ŋᵛ"] = "𐬤",
},
[2] = {
["a" .. macron] = "𐬁",
["a" .. ring] = "𐬂",
["a" .. ogonek] = "𐬄",
["ə"] = "𐬆",
["e" .. macron] = "𐬉",
["o" .. macron] = "𐬋",
["i" .. macron] = "𐬍",
["u" .. macron] = "𐬏",
["x" .. acute] = "𐬒",
["xᵛ"] = "𐬓",
["g" .. dot_above] = "𐬕",
["γ"] = "𐬖",
["θ"] = "𐬚",
["δ"] = "𐬜",
["t" .. tilde_below] = "𐬝",
["β"] = "𐬡",
["ŋ"] = "𐬢",
["n" .. acute] = "𐬦",
["n" .. dot_below] = "𐬧",
["m" .. ogonek] = "𐬨",
["y" .. dot_above] = "𐬪",
["s" .. caron] = "𐬱",
["z" .. caron] = "𐬲",
},
[3] = {
["a"] = "𐬀",
["e"] = "𐬈",
["o"] = "𐬊",
["i"] = "𐬌",
["u"] = "𐬎",
["k"] = "𐬐",
["x"] = "𐬑",
["g"] = "𐬔",
["c"] = "𐬗",
["j"] = "𐬘",
["t"] = "𐬙",
["d"] = "𐬛",
["p"] = "𐬞",
["f"] = "𐬟",
["b"] = "𐬠",
["n"] = "𐬥",
["m"] = "𐬨",
["y"] = "𐬫",
["v"] = "𐬬",
["r"] = "𐬭",
["l"] = "𐬮",
["s"] = "𐬯",
["z"] = "𐬰",
["h"] = "𐬵",
["%*"] = "⸱",
["%."] = "𐬹",
},
[4] = {
["%-"] = "-",
},
}
data["ae-tr"] = {
[1] = {
["_"] = macron,
["@"] = "ə",
["ǝ"] = "ə", -- map "wrong" schwa to right one
["ð"] = "δ", -- map alternative notation for δ
["ɣ"] = "γ", -- map alternative notation for γ
["j" .. caron] = "j", -- map ǰ (alternative notation) to regular j
["c" .. caron] = "c", -- map č (alternative notation) to regular c
["%*"] = dot_above,
["`"] = ogonek,
["'"] = acute,
["%^"] = caron,
["%."] = dot_below,
["~"] = tilde_below,
["0"] = ring,
["aaE"] = "ə̄",
["aaN"] = "ą̇",
["aaO"] = "ā̊",
["shy"] = "š́",
["ssh"] = "ṣ̌",
["ngy"] = "ŋ́",
["ngv"] = "ŋᵛ",
["w"] = "uu", -- map alternative notation for uu
["t" .. tilde] = "t̰", -- map alternative notation for t̰
},
[2] = {
["aO"] = "å",
["aN"] = "ą",
["aE"] = "ə",
["xy"] = "x́",
["xv"] = "xᵛ",
["gg"] = "ġ",
["gh"] = "γ",
["G"] = "γ",
["th"] = "θ",
["dh"] = "δ",
["T"] = "θ",
["D"] = "δ",
["tt"] = "t̰",
["bh"] = "β",
["B"] = "β",
["N"] = "ŋ",
["ny"] = "ń",
["nn"] = "ṇ",
["hm"] = "m̨",
["yy"] = "ẏ",
["sh"] = "š",
["zh"] = "ž",
},
}
return data
a3uz62e6wsg6qt7b3yw66hws8lctzx7
Module talk:typing-aids/data/akk
829
125540
193500
2024-11-21T10:44:27Z
Pinthura
2424
Moved page from [[en:Module:typing-aids/data/akk]]
193500
wikitext
text/x-wiki
{| class="wikitable"
! oldid || date/time || username || edit summary
|----
| 58233825 || 2019-12-26T14:51:28Z || Tom 144 || <nowiki></nowiki>
|----
| 58218969 || 2019-12-23T14:05:29Z || Tom 144 || <nowiki></nowiki>
|----
| 58137266 || 2019-12-09T16:23:15Z || Tom 144 || <nowiki></nowiki>
|----
| 58137253 || 2019-12-09T16:21:15Z || Tom 144 || <nowiki></nowiki>
|----
| 58066680 || 2019-11-25T21:26:00Z || Tom 144 || <nowiki></nowiki>
|----
| 58032375 || 2019-11-20T22:57:53Z || Tom 144 || <nowiki></nowiki>
|----
| 58032359 || 2019-11-20T22:49:51Z || Tom 144 || <nowiki></nowiki>
|----
| 58031010 || 2019-11-20T17:11:10Z || Tom 144 || <nowiki></nowiki>
|----
| 58030541 || 2019-11-20T15:33:58Z || Tom 144 || <nowiki></nowiki>
|----
| 58030456 || 2019-11-20T15:08:53Z || Tom 144 || <nowiki></nowiki>
|----
| 58030444 || 2019-11-20T15:04:47Z || Tom 144 || <nowiki></nowiki>
|----
| 58022322 || 2019-11-19T02:30:16Z || Tom 144 || <nowiki></nowiki>
|----
| 58022311 || 2019-11-19T02:24:46Z || Tom 144 || <nowiki></nowiki>
|----
| 53438478 || 2019-06-24T19:10:04Z || Tom 144 || <nowiki></nowiki>
|----
| 53396745 || 2019-06-21T16:06:38Z || Erutuon || <nowiki>localize variable</nowiki>
|----
| 53296682 || 2019-06-10T16:04:11Z || Tom 144 || <nowiki></nowiki>
|----
| 53291886 || 2019-06-09T17:08:14Z || Tom 144 || <nowiki></nowiki>
|----
| 53155740 || 2019-05-22T18:47:36Z || Tom 144 || <nowiki></nowiki>
|----
| 53155716 || 2019-05-22T18:42:13Z || Tom 144 || <nowiki></nowiki>
|----
| 53054279 || 2019-05-17T19:13:43Z || Tom 144 || <nowiki></nowiki>
|----
| 53054252 || 2019-05-17T19:04:39Z || Tom 144 || <nowiki></nowiki>
|----
| 53054239 || 2019-05-17T19:00:51Z || Tom 144 || <nowiki></nowiki>
|----
| 53054216 || 2019-05-17T18:55:04Z || Tom 144 || <nowiki></nowiki>
|----
| 52821816 || 2019-05-12T05:12:35Z || Tom 144 || <nowiki></nowiki>
|----
| 52821812 || 2019-05-12T05:11:07Z || Tom 144 || <nowiki></nowiki>
|----
| 52821800 || 2019-05-12T05:04:44Z || Tom 144 || <nowiki></nowiki>
|----
| 52817101 || 2019-05-11T16:13:56Z || Tom 144 || <nowiki></nowiki>
|----
| 52653924 || 2019-05-07T02:02:52Z || Tom 144 || <nowiki></nowiki>
|----
| 52653908 || 2019-05-07T01:59:20Z || Tom 144 || <nowiki></nowiki>
|----
| 52653888 || 2019-05-07T01:55:54Z || Tom 144 || <nowiki></nowiki>
|----
| 52653884 || 2019-05-07T01:54:22Z || Tom 144 || <nowiki></nowiki>
|----
| 52653871 || 2019-05-07T01:51:54Z || Tom 144 || <nowiki></nowiki>
|----
| 52653805 || 2019-05-07T01:42:24Z || Tom 144 || <nowiki></nowiki>
|----
| 52653749 || 2019-05-07T01:35:58Z || Tom 144 || <nowiki></nowiki>
|----
| 52634908 || 2019-05-05T05:40:00Z || Tom 144 || <nowiki></nowiki>
|----
| 52634896 || 2019-05-05T05:39:49Z || Tom 144 || <nowiki></nowiki>
|----
| 52436972 || 2019-04-23T22:48:44Z || Tom 144 || <nowiki></nowiki>
|----
| 52436473 || 2019-04-23T21:57:43Z || Tom 144 || <nowiki></nowiki>
|----
| 52405177 || 2019-04-22T23:42:45Z || Tom 144 || <nowiki>Yes, Akkadian distinguishes between "š", "s" and "ṣ".</nowiki>
|----
| 52405168 || 2019-04-22T23:40:28Z || Erutuon || <nowiki>plain s is used in transliteration, right?</nowiki>
|----
| 52405163 || 2019-04-22T23:38:11Z || Erutuon || <nowiki>"pre" not needed?</nowiki>
|----
| 52405154 || 2019-04-22T23:35:54Z || Erutuon || <nowiki>asterisks have to be escaped</nowiki>
|----
| 52405138 || 2019-04-22T23:32:43Z || Tom 144 || <nowiki></nowiki>
|----
| 52403177 || 2019-04-22T19:20:54Z || Tom 144 || <nowiki></nowiki>
|----
| 52402987 || 2019-04-22T19:03:37Z || Tom 144 || <nowiki></nowiki>
|----
| 52402924 || 2019-04-22T18:58:42Z || Tom 144 || <nowiki></nowiki>
|----
| 52402887 || 2019-04-22T18:54:44Z || Tom 144 || <nowiki></nowiki>
|----
| 52402866 || 2019-04-22T18:52:43Z || Tom 144 || <nowiki></nowiki>
|----
| 52402856 || 2019-04-22T18:51:37Z || Tom 144 || <nowiki></nowiki>
|----
| 52402810 || 2019-04-22T18:46:23Z || Tom 144 || <nowiki></nowiki>
|----
| 52402752 || 2019-04-22T18:41:07Z || Tom 144 || <nowiki></nowiki>
|----
| 52402744 || 2019-04-22T18:40:31Z || Tom 144 || <nowiki></nowiki>
|----
| 52401083 || 2019-04-22T12:37:19Z || Tom 144 || <nowiki></nowiki>
|----
| 52401032 || 2019-04-22T12:24:10Z || Tom 144 || <nowiki></nowiki>
|----
| 52398518 || 2019-04-22T02:38:55Z || Tom 144 || <nowiki></nowiki>
|----
| 52398512 || 2019-04-22T02:36:37Z || Tom 144 || <nowiki></nowiki>
|----
| 52398504 || 2019-04-22T02:31:24Z || Tom 144 || <nowiki></nowiki>
|----
| 52398500 || 2019-04-22T02:28:26Z || Tom 144 || <nowiki></nowiki>
|----
| 52398491 || 2019-04-22T02:23:32Z || Tom 144 || <nowiki></nowiki>
|----
| 52398393 || 2019-04-22T01:48:49Z || Tom 144 || <nowiki></nowiki>
|----
| 52398339 || 2019-04-22T01:30:05Z || Tom 144 || <nowiki></nowiki>
|----
| 52398336 || 2019-04-22T01:29:21Z || Tom 144 || <nowiki></nowiki>
|----
| 52398331 || 2019-04-22T01:28:02Z || Tom 144 || <nowiki></nowiki>
|----
| 52398296 || 2019-04-22T01:13:49Z || Tom 144 || <nowiki></nowiki>
|----
| 52398271 || 2019-04-22T01:00:49Z || Tom 144 || <nowiki></nowiki>
|----
| 52398267 || 2019-04-22T00:57:32Z || Tom 144 || <nowiki></nowiki>
|----
| 52398265 || 2019-04-22T00:56:24Z || Tom 144 || <nowiki></nowiki>
|----
| 52398260 || 2019-04-22T00:52:56Z || Tom 144 || <nowiki></nowiki>
|----
| 52398229 || 2019-04-22T00:41:16Z || Tom 144 || <nowiki></nowiki>
|----
| 52397987 || 2019-04-21T22:55:15Z || Tom 144 || <nowiki></nowiki>
|----
| 52397977 || 2019-04-21T22:52:48Z || Tom 144 || <nowiki></nowiki>
|----
| 52397963 || 2019-04-21T22:43:51Z || Tom 144 || <nowiki></nowiki>
|----
| 52397959 || 2019-04-21T22:40:50Z || Tom 144 || <nowiki></nowiki>
|----
| 52397913 || 2019-04-21T22:25:28Z || Tom 144 || <nowiki></nowiki>
|----
| 52397907 || 2019-04-21T22:22:38Z || Tom 144 || <nowiki></nowiki>
|----
| 52397898 || 2019-04-21T22:21:00Z || Tom 144 || <nowiki></nowiki>
|----
| 52397859 || 2019-04-21T22:15:46Z || Tom 144 || <nowiki></nowiki>
|----
| 52397849 || 2019-04-21T22:11:11Z || Tom 144 || <nowiki></nowiki>
|----
| 52397797 || 2019-04-21T21:50:48Z || Tom 144 || <nowiki></nowiki>
|----
| 52397705 || 2019-04-21T20:58:57Z || Erutuon || <nowiki>copy of [[Module:typing-aids/data/hit]] but with "hit" replaced with "akk": please Akkadian-ify!</nowiki>
|}
7wqr190eukkwf6sn28dsmf2vnmx6bn6
Module:typing-aids/data/akk
828
125541
193501
2024-11-21T10:44:37Z
Pinthura
2424
Moved page from [[en:Module:typing-aids/data/akk]] ([[Module talk:typing-aids/data/akk|history]])
193501
Scribunto
text/plain
local replacements = {}
replacements["akk"] = {
-- V
["a"] = "𒀀",
["á"] = "𒀉",
["à"] = "𒂍",
["aʿ"] = "𒄴",
["e"] = "𒂊",
["é"] = "𒂍",
["è"] = "𒌓𒁺",
["eʿ"] = "𒄴",
["i"] = "𒄿",
["ì"] = "𒉌",
["iʿ"] = "𒄴",
["ʿi"] = "𒄭",
["u"] = "𒌋",
["ú"] = "𒌑",
["ù"] = "𒅇",
["uʿ"] = "𒄴",
["ʿ"] = "𒀪",
-- CV
-- Labials
["pa"] = "𒉺", ["pe"] = "𒉿", ["pi"] = "𒉿", ["pu"] = "𒁍",
["pé"] = "𒁁", ["pí"] = "𒁉",
["ba"] = "𒁀", ["be"] = "𒁁", ["bi"] = "𒁉", ["bu"] = "𒁍",
["bí"] = "𒉈", ["bé"] = "𒁉",
-- Alveolars
["ta"] = "𒋫", ["te"] = "𒋼", ["ti"] = "𒋾", ["tu"] = "𒌅",
["tú"] = "𒌓",
["da"] = "𒁕", ["de"] = "𒁲", ["di"] = "𒁲", ["du"] = "𒁺",
["dá"] = "𒋫", ["dú"] = "𒌅",
["tì"] = "𒋾",
["ṭa"] = "𒁕", ["ṭe"] = "𒁲", ["ṭi"] = "𒁲", ["ṭu"] = "𒂆",
["ṭá"] = "𒋫", ["ṭú"] = "𒌅",
["ṭà"] = "𒄭", ["ṭì"] = "𒋾", ["ṭè"] = "𒉈", ["ṭù"] = "𒁺",
-- Velars
["ka"] = "𒅗", ["ke"] = "𒆠", ["ki"] = "𒆠", ["ku"] = "𒆪",
["ká"] = "𒆍",
["ga"] = "𒂵", ["ge"] = "𒄀", ["gi"] = "𒄀", ["gu"] = "𒄖",
["qa"] = "𒋡", ["qi"] = "𒆥", ["qe"] = "𒆥", ["qu"] = "𒄣",
["qá"] = "𒂵", ["qí"] = "𒆠", ["qé"] = "𒆠", ["qú"] = "𒆪",
["qà"] = "𒅗",
["ḫa"] = "𒄩", ["ḫe"] = "𒄭", ["ḫi"] = "𒄭", ["ḫu"] = "𒄷",
["ḫé"] = "𒃶", ["ḫí"] = "𒃶",
-- Sibilants
["sa"] = "𒊓", ["se"] = "𒋛", ["si"] = "𒋛", ["su"] = "𒋢",
["sá"] = "𒁲", ["sé"] = "𒍣", ["sí"] = "𒍣", ["sú"] = "𒍪",
["sà"] = "𒍝",
["ṣa"] = "𒍝", ["ṣe"] = "𒍢", ["ṣi"] = "𒍢", ["ṣu"] = "𒍮",
["ṣé"] = "𒍣", ["ṣí"] = "𒍣", ["ṣú"] = "𒍪",
["ša"] = "𒊭", ["še"] = "𒊺", ["ši"] = "𒅆", ["šu"] = "𒋗",
["šá"] = "𒃻", ["šú"] = "𒋙",
["za"] = "𒍝", ["ze"] = "𒍣", ["zi"] = "𒍣", ["zu"] = "𒍪",
["zé"] = "𒍢", ["zí"] = "𒍢",
["zè"] = "𒍢",
-- Nasals
["ma"] = "𒈠", ["me"] = "𒈨", ["mé"] = "𒈪", ["mi"] = "𒈪", ["mu"] = "𒈬",
["na"] = "𒈾", ["ne"] = "𒉈", ["ni"] = "𒉌", ["nu"] = "𒉡",
["né"] = "𒉌",
-- Liquids
["la"] = "𒆷", ["le"] = "𒇷", ["li"] = "𒇷", ["lu"] = "𒇻",
["lá"] = "𒇲",
["ra"] = "𒊏", ["re"] = "𒊑", ["ri"] = "𒊑", ["ru"] = "𒊒",
["rá"] = "𒁺", ["rí"]="𒌷",
-- Approximants
["wa"] = "𒉿", ["we"] = "𒉿", ["wi"] = "𒉿", ["wu"] = "𒉿",
["ja"] = "𒅀", ["je"] = "𒅀", ["ji"] = "𒅀", ["ju"] = "𒅀",
-- VC
-- Labials
["ap"] = "𒀊", ["ep"] = "𒅁", ["ip"] = "𒅁", ["up"] = "𒌒",
["ab"] = "𒀊", ["eb"] = "𒅁", ["ib"] = "𒅁", ["ub"] = "𒌒",
-- Alveolars
["at"] = "𒀜", ["et"] = "𒀉", ["it"] = "𒀉", ["ut"] = "𒌓",
["ad"] = "𒀜", ["ed"] = "𒀉", ["id"] = "𒀉", ["ud"] = "𒌓",
["aṭ"] = "𒀜", ["eṭ"] = "𒀉", ["iṭ"] = "𒀉", ["uṭ"] = "𒌓",
-- Velars
["ak"] = "𒀝", ["ek"] = "𒅅", ["ik"] = "𒅅", ["uk"] = "𒊌",
["ag"] = "𒀝", ["eg"] = "𒅅", ["ig"] = "𒅅", ["ug"] = "𒊌",
["aq"] = "𒀝", ["eq"] = "𒅅", ["iq"] = "𒅅", ["uq"] = "𒊌",
["aḫ"] = "𒄴", ["eḫ"] = "𒄴", ["iḫ"]= "𒄴", ["uḫ"] = "𒄴",
-- Sibilants
["aš"] = "𒀸", ["iš"] = "𒅖", ["eš"] = "𒌍", ["uš"] = "𒍑",
["áš"] = "𒀾",["ìš"] = "𒌍",
["as"] = "𒊍", ["is"] = "𒄑", ["es"] = "𒄑", ["us"] = "𒊻",
["ás"] = "𒀾", ["ís"] = "𒅖", ["ús"] = "𒍑",
["ìs"] = "𒀊", ["ès"] = "𒀊",
["aṣ"] = "𒊍", ["iṣ"] = "𒄑", ["eṣ"] = "𒄑", ["uṣ"] = "𒊻",
["áṣ"] = "𒀾", ["íṣ"] = "𒅖", ["úṣ"] = "𒍑",
["ìṣ"] = "𒀊", ["èṣ"] = "𒀊",
["az"] = "𒊍", ["iz"] = "𒄑", ["ez"] = "𒄑", ["uz"] = "𒊻",
["áz"] = "𒀾", ["íz"] = "𒅖", ["úz"] = "𒍑",
["ìz"] = "𒀊", ["èz"] = "𒀊",
-- Nasals
["am"] = "𒄠", ["em"] = "𒅎", ["im"] = "𒅎", ["um"] = "𒌝",
["an"] = "𒀭", ["en"] = "𒂗", ["in"] = "𒅔", ["un"] = "𒌦",
-- Liquids
["al"] = "𒀠", ["el"] = "𒂖", ["il"] = "𒅋", ["ul"] = "𒌌",
["ar"] = "𒅈", ["er"] = "𒅕", ["ir"] = "𒅕", ["ur"] = "𒌨", ["úr"] = "𒌫",
-- VCV
--ḫ
["ḫal"] = "𒄬", ["ḫab"] = "𒆸", ["ḫap"] = "𒆸", ["ḫaš"] = "𒋻", ["ḫad"] = "𒉺", ["ḫat"] = "𒉺",
["ḫul"] = "𒅆", ["ḫub"] = "𒄽", ["ḫup"] = "𒄽", ["ḫar"] = "𒄯", ["ḫur"] = "𒄯",
--k/g/q
["gal"] = "𒃲", ["kal"] = "𒆗", ["gal₉"] = "𒆗", ["kam"] = "𒄰", ["gám"] = "𒄰",
["gan"] = "𒃶", ["kán"] = "𒃷", ["gán"] = "𒃷", ["kab"] = "𒆏", ["kap"] = "𒆏", ["gáb"] = "𒆏", ["gáp"] = "𒆏",
["kar"] = "𒋼𒀀", ["kàr"] = "𒃼", ["gàr"] = "𒃼", ["kaš"] = "𒁉", ["gaš"] = "𒁉",
["kad"] = "𒃰", ["kat"] = "𒃰", ["gad"] = "𒃰", ["gat"] = "𒃰", ["gaz"] = "𒄤",
["kir"] = "𒄫", ["gir"] = "𒄫", ["kiš"] = "𒆧", ["kid₉"] = "𒃰", ["kit₉"] = "𒃰",
["kal"] = "𒆗", ["kul"] = "𒆰", ["kúl"] = "𒄢", ["gul"] = "𒄢",
["kum"] = "𒄣", ["gum"] = "𒄣", ["qum"] = "𒄣", ["kur"] = "𒆳", ["kùr"] = "𒄥", ["gur"] = "𒄥",
--l
["lal"] = "𒇲", ["lam"] = "𒇴", ["lig"] = "𒌨", ["lik"] = "𒌨", ["lim"] = "𒅆", ["liš"] = "𒇺", ["luḫ"] = "𒈛", ["lum"] = "𒈝",
--m
["maḫ"] = "𒈤", ["man"] = "𒎙", ["mar"] = "𒈥", ["maš"] = "𒈦", ["meš"] = "𒈨𒌍",
["mil"] = "𒅖", ["mel"] = "𒅖", ["miš"] = "𒈩", ["mur"] = "𒄯", ["mut"] = "𒄷𒄭",
--n
["nam"] = "𒉆", ["nab"] = "𒀮", ["nap"] = "𒀮", ["nir"] = "𒉪", ["niš"] = "𒎙", ["núm"] = "𒈝",
--p/b
["pal"] = "𒁄", ["bal"] = "𒁄", ["pár"] = "𒈦", ["bar"] = "𒈦", ["paš"] = "𒄫",
["pád"] = "𒁁", ["pát"] = "𒁁", ["píd"] = "𒁁", ["pít"] = "𒁁", ["bil"] = "𒉈", ["pil"] = "𒉈", ["píl"] = "𒉋", ["bíl"] = "𒉋",
["pir"] = "𒌓", ["piš"] = "𒄫", ["biš"] = "𒄫", ["pùš"] = "𒄫", ["pur"] = "𒁓", ["bur"] = "𒁓",
--r
["rad"] = "𒋥", ["rat"] = "𒋥", ["riš"] = "𒊕", ["rum"] = "𒀸",
--s
["šaḫ"] = "𒋚", ["šag"] = "𒊕", ["šak"] = "𒊕", ["šal"] = "𒊩", ["šam"] = "𒌑", ["šàm"] = "𒉓",
["šab"] = "𒉺𒅁", ["šap"] = "𒉺𒅁", ["šar"] = "𒊬", ["šìp"] = "𒉺𒅁", ["šir"] = "𒋓", ["šum"] = "𒋳", ["šur"] = "𒋩",
--t/d
["taḫ"] = "𒈭", ["daḫ"] = "𒈭", ["túḫ"] = "𒈭", ["tág"] = "𒁖", ["ták"] = "𒁖", ["dag"] = "𒁖", ["dak"] = "𒁖", ["tùm"] = "𒉐",
["tal"] = "𒊑", ["dal"] = "𒊑", ["tam"] = "𒌓", ["tám"] = "𒁮", ["dam"] = "𒁮", ["tan"] = "𒆗", ["dan"] = "𒆗",
["tab"] = "𒋰", ["tap"] = "𒋰", ["dáb"] = "𒋰", ["dáp"] = "𒋰", ["tar"] = "𒋻",
["táš"] = "𒁹", ["dáš"] = "𒁹", ["tiš"] = "𒁹", ["diš"] = "𒁹",
["tàš"] = "𒀾", ["tin"] = "𒁷", ["tén"] = "𒁷", ["tim"] = "𒁴", ["ṭim"] = "𒁴", ["dim"] = "𒁴",
["dir"] = "𒋛𒀀", ["tir"] = "𒌁", ["ter"] = "𒌁", ["tíś"] = "𒌨", ["túl"] = "𒇥",
["tum"] = "𒌈", ["túm"] = "𒁺", ["dum"] = "𒌈", ["ṭum"] = "𒌈", ["tub"] = "𒁾", ["tup"] = "𒁾", ["ṭub"] = "𒁾", ["ṭup"] = "𒁾", ["dub"] = "𒁾",
["dup"] = "𒁾",
["ṭur"] = "𒄙", ["túr"] = "𒄙", ["dur"] = "𒄙",
--z/s
["zul"] = "𒂄", ["zum"] = "𒍮", ["súm"] = "𒍮", ["ṣum"] = "𒍮",
-- CVCV(C)
["gaba"] = "𒃮", ["gigir"] = "𒇀",
-- Determiners
["DIDLI"] ="𒀸", ["DINGIR"]="𒀭", ["d"]="𒀭", ["DUG"]="𒂁", ["É"]="𒂍", ["GAD"]="𒃰", ["GI"]="𒄀",
["GIŠ"]="𒄑", ["GUD"]="𒄞", ["ḪI.A"]="𒄭𒀀", ["ḪUR.SAG"]="𒄯𒊕", ["IM"]="𒅎",
["ITU"]="𒌚",
["KAM"]="𒄰", ["KI"]="𒆠", ["KUR"]="𒆳", ["KUŠ"]="𒋢", ["LÚ"]="𒇽", ["MEŠ"]="𒈨𒌍",
["MUL"]="𒀯",
["MUNUS"]="𒊩", ["MUŠ"]="𒈲", ["MUŠEN"]="𒄷", ["NINDA"]="𒃻", ["SAR"]="𒊬",
["SI"]="𒋛", ["SIG"]="𒋠", ["TÚG"]="𒌆", ["URU"]="𒌷", ["URUDU"]="𒍐", ["GABA"] = "𒃮", ["GIGIR"] = "𒇀",
["UZU"]="𒍜",
-- Logograms
["KASKAL"]="𒆜", ["LUGAL"]="𒈗", ["GÌR"]="𒄊", ["GÍR"]="𒄈", ["IGI"]="𒃲",
["SÍG"]="𒋠" ,["IŠKUR"]="𒅎", ["AN"]="𒀭", ["TÙM"] = "𒉐", ["GUL"] = "𒄢", ["UD"] = "𒌓", ["KÁ"] = "𒆍",
["RA"] = "𒊏", ["NA₄"] = "𒉌𒌓", ["AD"] = "𒀜", ["LÌL"] = "𒋙𒌍", ["LÍL"] = "𒆤", ["LÁ"] = "𒇲", ["IRI"] ="𒌷", ["RÍ"]="𒌷",
["AMA"] = "𒂼", ["GÁL"] = "𒅅", ["UTU"] = "𒌓", ["MURUB₄"] = "𒌘",
}
replacements["akk-tr"] = {
["a'"] = "aʿ", ["e'"] = "eʿ", ["i'"] = "iʿ", ["u'"] = "uʿ",
["s^"] = "š", ["s%*"] = "ṣ", ["t%*"] = "ṭ", ["h"] = "ḫ", ["4"] = "₄", ["5"] = "₅", ["9"] = "₉", ["a\\"] = "à",
}
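-- Illustration (the replacement mechanism itself lives in [[Module:typing-aids]];
-- this module only supplies data): the shortcuts above rewrite typed ASCII into the
-- marked transliteration, e.g. "s^a" -> "ša" and "s*a" -> "ṣa", which the "akk"
-- table above maps to 𒊭 and 𒍝 respectively, while plain "sa" -> 𒊓, since Akkadian
-- distinguishes š, ṣ and s; the asterisk is escaped as "%*" because * is a
-- quantifier in Lua patterns.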
return replacements
2gx07231lo09zrvqnz0t9whlx05ee3s
Module talk:typing-aids/data/bho
829
125542
193502
2024-11-21T10:44:47Z
Pinthura
2424
Moved page from [[en:Module:typing-aids/data/bho]]
193502
wikitext
text/x-wiki
{| class="wikitable"
! oldid || date/time || username || edit summary
|----
| 81048709 || 2024-08-15T16:28:18Z || Kutchkutch || <nowiki></nowiki>
|----
| 81048693 || 2024-08-15T16:25:14Z || Kutchkutch || <nowiki></nowiki>
|----
| 81045227 || 2024-08-15T03:06:28Z || Kutchkutch || <nowiki></nowiki>
|----
| 81045216 || 2024-08-15T03:04:22Z || Kutchkutch || <nowiki>Kutchkutch moved page [[Module:typing-aids/data/Kthi]] to [[Module:typing-aids/data/bho]] without leaving a redirect</nowiki>
|----
| 81045197 || 2024-08-15T02:59:41Z || Kutchkutch || <nowiki>Created page with "local data = {} local U = require("Module:string/char") local anusvAra = U(0x11081) local visarga = U(0x11082) local virAma = U(0x110B9) local nuktA = U(0x110BA) local candrabindu = U(0x11080) local avagraha = "ऽ" local consonants = "𑂍-𑂯" local consonant = "[" .. consonants .. "]" .. nuktA .. "?" local acute = U(0x301) -- combining acute data["Kthi"] = { -- Vowels and modifiers. Do the diphthongs and diaereses first. {"ai", "𑂊"}, {"au", "𑂌"}, {"ä"..."</nowiki>
|}
ogcuc3d5hia3h3gqiwmqak27jywni4h
Module:typing-aids/data/bho
828
125543
193503
2024-11-21T10:44:57Z
Pinthura
2424
Moved page from [[en:Module:typing-aids/data/bho]] ([[Module talk:typing-aids/data/bho|history]])
193503
Scribunto
text/plain
local data = {}
local U = require("Module:string/char")
local anusvAra = U(0x11081)
local visarga = U(0x11082)
local virAma = U(0x110B9)
local nuktA = U(0x110BA)
local candrabindu = U(0x11080)
local avagraha = "ऽ"
local consonants = "𑂍-𑂯"
local consonant = "[" .. consonants .. "]" .. nuktA .. "?"
local acute = U(0x301) -- combining acute
data["bho"] = {
-- Vowels and modifiers. Do the diphthongs and diaereses first.
{"ai", "𑂊"},
{"au", "𑂌"},
{"ä", "𑂃"},
{"ö", "𑂋"},
{"ï", "𑂅"},
{"ü", "𑂇"},
{"a", "𑂃"},
{"ā", "𑂄"},
{"i", "𑂅"},
{"ī", "𑂆"},
{"u", "𑂇"},
{"ū", "𑂈"},
{"e", "𑂉"},
{"o", "𑂋"},
-- {"ṝ", ""},
-- {"ṛ", "𑂩𑂱"},
-- {"r̥", "𑂩𑂱"},
-- {"ḹ", ""},
-- {"ḷ", ""},
{"(𑂃)[%-/]([𑂅𑂇])", "%1%2"}, -- a-i, a-u for अइ, अउ; must follow rules for "ai", "au"
-- Two-letter consonants must go before h.
{"kh", "𑂎"},
{"gh", "𑂐"},
{"ch", "𑂓"},
{"jh", "𑂕"},
{"ṭh", "𑂘"},
{"ḍh", "𑂛"},
{"ɽh", "𑂜"},
{"th", "𑂟"},
{"dh", "𑂡"},
{"ph", "𑂤"},
{"bh", "𑂦"},
{"h", "𑂯"},
-- Other stops.
{"k", "𑂍"},
{"g", "𑂏"},
{"c", "𑂒"},
{"j", "𑂔"},
{"ṭ", "𑂗"},
{"ḍ", "𑂙"},
{"ɽ", "𑂚"},
{"t", "𑂞"},
{"d", "𑂠"},
{"p", "𑂣"},
{"b", "𑂥"},
-- Nasals.
{"ṅ", "𑂑"},
{"ñ", "𑂖"},
{"ṇ", "𑂝"},
{"n", "𑂢"},
{"n", "𑂢"},
{"m", "𑂧"},
-- Remaining consonants.
{"y", "𑂨"},
{"r", "𑂩"},
{"l", "𑂪"},
{"v", "𑂫"},
{"ś", "𑂬"},
{"ṣ", "𑂭"},
{"s", "𑂮"},
{"ṃ", anusvAra},
{"ḥ", visarga},
{"'", avagraha},
{"~", candrabindu},
-- This rule must be applied twice because a consonant may only be in one capture per operation,
-- so "CCC" will only recognize the first two consonants. Must follow all consonant conversions.
{"(" .. consonant .. ")(" .. consonant .. ")", "%1" .. virAma .. "%2"},
{"(" .. consonant .. ")(" .. consonant .. ")", "%1" .. virAma .. "%2"},
{"(" .. consonant .. ")$", "%1" .. virAma},
{acute, ""},
}
local vowels = {
["𑂅"] = U(0x110B1),
["𑂆"] = U(0x110B2),
["𑂇"] = U(0x110B3),
["𑂈"] = U(0x110B4),
["𑂉"] = U(0x110B5),
["𑂊"] = U(0x110B6),
["𑂋"] = U(0x110B7),
["𑂌"] = U(0x110B8),
["𑂄"] = U(0x110B0),
-- ["𑂩𑂱"] = U(0x110C2),
-- ["ॠ"] = "",
}
-- Convert independent vowels to diacritics after consonants. Must go after all consonant conversions.
for independentForm, diacriticalForm in pairs(vowels) do
table.insert(data["bho"], {"(" .. consonant .. ")" .. independentForm, "%1" .. diacriticalForm})
end
-- This must go last, after independent vowels are converted to diacritics, or "aï", "aü" won't work.
table.insert(data["bho"], {"(" .. consonant .. ")𑂃", "%1"})
-- [[w:Harvard-Kyoto]] to [[w:International Alphabet of Sanskrit Transliteration]]
data["bho-tr"] = {
[1] = {
["A"] = "ā",
["I"] = "ī",
["U"] = "ū",
["J"] = "ñ",
["T"] = "ṭ",
["D"] = "ḍ",
["N"] = "ṇ",
["G"] = "ṅ",
["z"] = "ś",
["S"] = "ṣ",
["M"] = "ṃ",
["H"] = "ḥ",
--["lRR"] = "ḹ",
["/"] = acute,
},
[2] = {
["_rh_"] = "ɽh",
-- ["lR"] = "ḷ",
-- ["RR"] = "ṝ",
},
[3] = {
["_r_"] = "ɽ",
["R"] = "ṛ",
},
}
return data
b4pwhi4p4v9jjjg891njbs6j4gbmnk0
Module talk:typing-aids/data/doi
829
125544
193504
2024-11-21T10:45:07Z
Pinthura
2424
Moved page from [[en:Module:typing-aids/data/doi]]
193504
wikitext
text/x-wiki
{| class="wikitable"
! oldid || date/time || username || edit summary
|----
| 65136839 || 2022-01-02T04:49:10Z || Kutchkutch || <nowiki>Created page with "local data = {} local U = mw.ustring.char local anusvAra = U(0x11837) local visarga = U(0x11838) local virAma = U(0x11839) local nuktA = U(0x1183A) local consonants = "𑠊-𑠫" local consonant = "[" .. consonants .. "]" .. nuktA .. "?" local acute = U(0x301) -- combining acute data["doi"] = { {"ai", "𑠇"}, {"au", "𑠉"}, {"aï", "𑠀𑠂"}, {"aü", "𑠀𑠄"}, {"aö", "𑠀𑠈"}, {"ṃ", anusvAra}, {"ḥ", visarga}, {"kh", "𑠋"}, {"gh", "𑠍"},..."</nowiki>
|}
hvtb2w931ilyfyf7muehioyp9lmmjki
Module:typing-aids/data/doi
828
125545
193505
2024-11-21T10:45:17Z
Pinthura
2424
Moved page from [[en:Module:typing-aids/data/doi]] ([[Module talk:typing-aids/data/doi|history]])
193505
Scribunto
text/plain
local data = {}
local U = mw.ustring.char
local anusvAra = U(0x11837)
local visarga = U(0x11838)
local virAma = U(0x11839)
local nuktA = U(0x1183A)
local consonants = "𑠊-𑠫"
local consonant = "[" .. consonants .. "]" .. nuktA .. "?"
local acute = U(0x301) -- combining acute
data["doi"] = {
{"ai", "𑠇"},
{"au", "𑠉"},
{"aï", "𑠀𑠂"},
{"aü", "𑠀𑠄"},
{"aö", "𑠀𑠈"},
{"ṃ", anusvAra},
{"ḥ", visarga},
{"kh", "𑠋"},
{"gh", "𑠍"},
{"ṅ", "𑠎"},
{"ch", "𑠐"},
{"jh", "𑠒"},
{"ñ", "𑠓"},
{"ṭh", "𑠕"},
{"ḍh", "𑠗"},
{"ṇ", "𑠘"},
{"th", "𑠚"},
{"dh", "𑠜"},
{"n", "𑠝"},
{"ph", "𑠟"},
{"bh", "𑠡"},
{"m", "𑠢"},
{"y", "𑠣"},
{"r", "𑠤"},
{"l", "𑠥"},
{"v", "𑠦"},
{"ś", "𑠧"},
{"ṣ", "𑠨"},
{"s", "𑠩"},
{"a", "𑠀"},
{"ā", "𑠁"},
{"i", "𑠂"},
{"ī", "𑠃"},
{"u", "𑠄"},
{"ū", "𑠅"},
{"e", "𑠆"},
{"o", "𑠈"},
{"r̥̄", "ॠ"},
{"k", "𑠊"},
{"g", "𑠌"},
{"c", "𑠏"},
{"j", "𑠑"},
{"ṭ", "𑠔"},
{"ḍ", "𑠖"},
{"t", "𑠙"},
{"d", "𑠛"},
{"p", "𑠞"},
{"b", "𑠠"},
{"h", "𑠪"},
{'̈', ""},
{"r̥", "ऋ"},
{"ṛ", "𑠫"},
{"(𑠀)[%-/]([𑠂𑠄])", "%1%2"}, -- a-i, a-u for 𑠀𑠂, 𑠀𑠄; must follow rules for "ai", "au"
{"(" .. consonant .. ")$", "%1" .. virAma},
{acute, ""},
-- This rule must be applied twice because a consonant may only be in one capture per operation, so "CCC" will only recognize the first two consonants.
{"(" .. consonant .. ")" .. "(" .. consonant .. ")", "%1" .. virAma .. "%2"},
{"(" .. consonant .. ")" .. "(" .. consonant .. ")", "%1" .. virAma .. "%2"},
{"i", "𑠂"},
{"u", "𑠄"},
}
local vowels = {
["𑠂"] = U(0x1182D),
["𑠄"] = U(0x1182F),
["ऋ"] = U(0x11831),
["𑠆"] = U(0x11833),
["𑠈"] = U(0x11835),
["𑠁"] = U(0x1182C),
["𑠃"] = U(0x1182E),
["𑠅"] = U(0x11830),
["ॠ"] = U(0x11832),
["𑠇"] = U(0x11834),
["𑠉"] = U(0x11836),
}
for independentForm, diacriticalForm in pairs(vowels) do
table.insert(data["doi"], {"(" .. consonant .. ")" .. independentForm, "%1" .. diacriticalForm})
end
-- This must go last, after independent vowels are converted to diacritics, or "aï", "aü" won't work.
table.insert(data["doi"], {"(" .. consonant .. ")𑠀", "%1"})
-- [[w:Harvard-Kyoto]] to [[w:International Alphabet of Sanskrit Transliteration]]
data["doi-tr"] = {
[1] = {
["A"] = "ā",
["I"] = "ī",
["U"] = "ū",
["J"] = "ñ",
["T"] = "ṭ",
["D"] = "ḍ",
["N"] = "ṇ",
["G"] = "ṅ",
["z"] = "ś",
["S"] = "ṣ",
["M"] = "ṃ",
["H"] = "ḥ",
["/"] = acute,
},
[2] = {
["R"] = "r̥",
},
}
return data
fpmdi3j8eg2rekkk1rtjbfbn4awun5z
Module talk:typing-aids/data/fa
829
125546
193506
2024-11-21T10:45:27Z
Pinthura
2424
Moved page from [[en:Module:typing-aids/data/fa]]
193506
wikitext
text/x-wiki
{| class="wikitable"
! oldid || date/time || username || edit summary
|----
| 48192422 || 2017-12-01T02:55:10Z || Erutuon || <nowiki>organizing to avoid conflicts</nowiki>
|----
| 48192323 || 2017-12-01T02:33:55Z || Erutuon || <nowiki></nowiki>
|----
| 48192296 || 2017-12-01T02:29:08Z || Erutuon || <nowiki>from [[Module talk:typing-aids#Farsi]]</nowiki>
|}
ih1z59vu818s14zcl6sgjhuseij5fq0
Module:typing-aids/data/fa
828
125547
193507
2024-11-21T10:45:37Z
Pinthura
2424
Moved page from [[en:Module:typing-aids/data/fa]] ([[Module talk:typing-aids/data/fa|history]])
193507
Scribunto
text/plain
local data = {
{
["'aa"] = "آ",
},
{
["aa"] = "ا",
["'a"] = "آ",
["'â"] = "آ",
["s'"] = "ش",
["z'"] = "ذ",
["z'"] = "ژ",
["'ye"] = "ﮥ",
["'u"] = "ؤ",
["'ii"] = "ئ",
},
{
["A"] = "ا",
["â"] = "ا",
["ā"] = "ا",
["b"] = "ب",
["p"] = "پ",
["t"] = "ت",
["c"] = "ث",
["ṯ"] = "ث",
["j"] = "ج",
["ǧ"] = "ج",
["č"] = "چ",
["C"] = "چ",
["H"] = "ح",
["ḥ"] = "ح",
["x"] = "خ",
["ḫ"] = "خ",
["ḵ"] = "خ",
["d"] = "د",
["ḏ"] = "ذ",
["ẕ"] = "ذ",
["r"] = "ر",
["z"] = "ز",
["ž"] = "ژ",
["s"] = "س",
["x"] = "ش",
["š"] = "ش",
["S"] = "ص",
["9"] = "ص",
["ṣ"] = "ص",
["D"] = "ض",
["ḍ"] = "ض",
["T"] = "ط",
["6"] = "ط",
["ṭ"] = "ط",
["Z"] = "ظ",
["ẓ"] = "ظ",
["ʿ"] = "ع",
["3"] = "ع",
["E"] = "ع",
["ʕ"] = "ع",
["ğ"] = "غ",
["G"] = "غ",
["ḡ"] = "غ",
["ɣ"] = "غ",
["f"] = "ف",
["q"] = "ق",
["k"] = "ک",
["g"] = "گ",
["l"] = "ل",
["m"] = "م",
["n"] = "ن",
["v"] = "و",
["uu"] = "و",
["U"] = "و",
["w"] = "و",
["ū"] = "و",
["h"] = "ه",
["y"] = "ی",
["ii"] = "ی",
["ī"] = "ی",
["aN"] = "اً",
[","] = "،",
[";"] = "؛",
["?"] = "؟",
["ʔ"] = "ء",
["'"] = "ء",
}
}
return data
59bpwzv5ecqibcx8580pxxuvrea3xya
Module talk:typing-aids/data/gmy
829
125548
193508
2024-11-21T10:45:47Z
Pinthura
2424
Moved page from [[en:Module:typing-aids/data/gmy]]
193508
wikitext
text/x-wiki
{| class="wikitable"
! oldid || date/time || username || edit summary
|----
| 53396750 || 2019-06-21T16:08:33Z || Erutuon || <nowiki>localize variable</nowiki>
|----
| 49127254 || 2018-03-09T20:58:17Z || Erutuon || <nowiki>Undo revision 49126825 by [[Special:Contributions/Erutuon|Erutuon]] ([[User talk:Erutuon|talk]]): same flat structure as [[Module:typing-aids/data/hit]]</nowiki>
|----
| 49126825 || 2018-03-09T19:34:20Z || Erutuon || <nowiki>logical order to prevent conflicts</nowiki>
|----
| 49126793 || 2018-03-09T19:30:45Z || Erutuon || <nowiki>intermediate table unnecessary, unless there's going to be another set of replacements in this module</nowiki>
|----
| 49124207 || 2018-03-09T12:14:10Z || PUC || <nowiki></nowiki>
|----
| 49124158 || 2018-03-09T12:08:43Z || PUC || <nowiki>Created page with "replacements = {} replacements["gmy"] = { ["a"] = "𐀀", ["e"] = "𐀁", ["i"] = "𐀂", ["o"] = "𐀃", ["u"] = "𐀄", ["da"] = "𐀅", ["de"] = "𐀆", ["di"]..."</nowiki>
|}
5ijgcwk120mbpbrf80crl4tn6kz677h
Module:typing-aids/data/gmy
828
125549
193509
2024-11-21T10:45:57Z
Pinthura
2424
Moved page from [[en:Module:typing-aids/data/gmy]] ([[Module talk:typing-aids/data/gmy|history]])
193509
Scribunto
text/plain
local replacements = {
["a"] = "𐀀",
["e"] = "𐀁",
["i"] = "𐀂",
["o"] = "𐀃",
["u"] = "𐀄",
["da"] = "𐀅",
["de"] = "𐀆",
["di"] = "𐀇",
["do"] = "𐀈",
["du"] = "𐀉",
["ja"] = "𐀊",
["je"] = "𐀋",
-- ji not in Unicode
["jo"] = "𐀍",
["ju"] = "𐀎",
["ka"] = "𐀏",
["ke"] = "𐀐",
["ki"] = "𐀑",
["ko"] = "𐀒",
["ku"] = "𐀓",
["ma"] = "𐀔",
["me"] = "𐀕",
["mi"] = "𐀖",
["mo"] = "𐀗",
["mu"] = "𐀘",
["na"] = "𐀙",
["ne"] = "𐀚",
["ni"] = "𐀛",
["no"] = "𐀜",
["nu"] = "𐀝",
["pa"] = "𐀞",
["pe"] = "𐀟",
["pi"] = "𐀠",
["po"] = "𐀡",
["pu"] = "𐀢",
["qa"] = "𐀣",
["qe"] = "𐀤",
["qi"] = "𐀥",
["qo"] = "𐀦",
-- qu not in Unicode
["ra"] = "𐀨",
["re"] = "𐀩",
["ri"] = "𐀪",
["ro"] = "𐀫",
["ru"] = "𐀬",
["sa"] = "𐀭",
["se"] = "𐀮",
["si"] = "𐀯",
["so"] = "𐀰",
["su"] = "𐀱",
["ta"] = "𐀲",
["te"] = "𐀳",
["ti"] = "𐀴",
["to"] = "𐀵",
["tu"] = "𐀶",
["wa"] = "𐀷",
["we"] = "𐀸",
["wi"] = "𐀹",
["wo"] = "𐀺",
-- wu not in Unicode
["za"] = "𐀼",
["ze"] = "𐀽",
-- zi not in Unicode
["zo"] = "𐀿",
-- zu not in Unicode
["ha"] = "𐁀",
["ai"] = "𐁁",
["au"] = "𐁂",
["dwe"] = "𐁃",
["dwo"] = "𐁄",
["nwo"] = "𐁅",
["phu"] = "𐁆",
["pte"] = "𐁇",
["rya"] = "𐁈",
["rai"] = "𐁉",
["ryo"] = "𐁊",
["tya"] = "𐁋",
["twe"] = "𐁌",
["two"] = "𐁍",
["*18"] = "𐁐",
["*19"] = "𐁑",
["*22"] = "𐁒",
["*34"] = "𐁓",
["*47"] = "𐁔",
["*49"] = "𐁕",
["*56"] = "𐁖",
["*63"] = "𐁗",
["*64"] = "𐁘",
["*79"] = "𐁙",
["*82"] = "𐁚",
["*83"] = "𐁛",
["*86"] = "𐁜",
["*89"] = "𐁝",
}
return replacements
02zntmsqyjmd4ls4a262s9nm7gj1zpq
Module talk:typing-aids/data/hit
829
125550
193510
2024-11-21T10:46:07Z
Pinthura
2424
Moved page from [[en:Module:typing-aids/data/hit]]
193510
wikitext
text/x-wiki
{| class="wikitable"
! oldid || date/time || username || edit summary
|----
| 53396747 || 2019-06-21T16:07:05Z || Erutuon || <nowiki>localize variable</nowiki>
|----
| 52347978 || 2019-04-13T14:28:06Z || Tom 144 || <nowiki></nowiki>
|----
| 51250443 || 2019-01-12T16:27:10Z || Tom 144 || <nowiki></nowiki>
|----
| 49002488 || 2018-02-15T15:56:06Z || Tom 144 || <nowiki></nowiki>
|----
| 49002334 || 2018-02-15T15:17:48Z || Tom 144 || <nowiki></nowiki>
|----
| 49002295 || 2018-02-15T15:06:33Z || Tom 144 || <nowiki></nowiki>
|----
| 48457879 || 2018-01-22T02:04:42Z || Tom 144 || <nowiki></nowiki>
|----
| 48429530 || 2018-01-17T18:19:49Z || Tom 144 || <nowiki>wrong diacritic</nowiki>
|----
| 48429517 || 2018-01-17T18:16:40Z || Tom 144 || <nowiki></nowiki>
|----
| 48429330 || 2018-01-17T17:33:33Z || Tom 144 || <nowiki></nowiki>
|----
| 48426858 || 2018-01-17T02:26:31Z || Tom 144 || <nowiki></nowiki>
|----
| 48426845 || 2018-01-17T02:25:07Z || Tom 144 || <nowiki></nowiki>
|----
| 48426094 || 2018-01-16T20:57:14Z || Tom 144 || <nowiki></nowiki>
|----
| 48425800 || 2018-01-16T19:38:48Z || Erutuon || <nowiki>further explanation</nowiki>
|----
| 42186241 || 2017-02-03T02:14:56Z || Erutuon || <nowiki>moved from [[Module:typing-aids/data]]</nowiki>
|}
t8htfcot2e6qeeyieyc8r5g4r9m1g5m
Module:typing-aids/data/hit
828
125551
193511
2024-11-21T10:46:17Z
Pinthura
2424
Moved page from [[en:Module:typing-aids/data/hit]] ([[Module talk:typing-aids/data/hit|history]])
193511
Scribunto
text/plain
local replacements = {}
replacements["hit"] = {
-- This converts plain characters to their diacritic forms before the
-- shortcuts below are processed.
-- The apostrophe is used in place of an acute, and a backslash \ in place of
-- a grave. Plain s and h have a háček and a breve below added to them, and
-- 5 and 9 are converted to subscripts.
-- For example:
-- pe' -> pé -> 𒁉
-- sa -> ša -> 𒊭
-- hal -> ḫal -> 𒄬
-- Thus, plain s and h can't be used in shortcuts.
["pre"] = {
["a'"] = "á", ["e'"] = "é", ["i'"] = "í", ["u'"] = "ú",
["s"] = "š", ["h"] = "ḫ", ["5"] = "₅", ["9"] = "₉", ["a\\"] = "à",
},
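-- A minimal sketch (not part of this data module; the helper name and the use of
-- mw.ustring.gsub are assumptions about the caller in [[Module:typing-aids]]) of
-- how the "pre" table above would be applied before the syllable shortcuts below:
--   local function normalize(text)
--       for plain, marked in pairs(replacements["hit"]["pre"]) do
--           text = mw.ustring.gsub(text, plain, marked)
--       end
--       return text
--   end
--   -- normalize("hal") --> "ḫal", which the table below maps to 𒄬.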
-- V
["a"] = "𒀀",
["e"] = "𒂊",
["i"] = "𒄿",
["ú"] = "𒌑",
["u"] = "𒌋",
-- CV
["ba"] = "𒁀", ["be"] = "𒁁", ["bi"] = "𒁉", ["bu"] = "𒁍",
["pa"] = "𒉺", ["pé"] = "𒁉", ["pí"] = "𒁉", ["pu"] = "𒁍",
["da"] = "𒁕", ["de"] = "𒁲", ["di"] = "𒁲", ["du"] = "𒁺",
["ta"] = "𒋫", ["te"] = "𒋼", ["ti"] = "𒋾", ["tu"] = "𒌅",
["ga"] = "𒂵", ["ge"] = "𒄀", ["gi"] = "𒄀", ["gu"] = "𒄖",
["ka"] = "𒅗", ["ke"] = "𒆠", ["ki"] = "𒆠", ["ku"] = "𒆪",
["ḫa"] = "𒄩", ["ḫe"] = "𒄭", ["ḫé"] = "𒃶", ["ḫi"] = "𒄭", ["ḫu"] = "𒄷",
["la"] = "𒆷", ["le"] = "𒇷", ["li"] = "𒇷", ["lu"] = "𒇻",
["ma"] = "𒈠", ["me"] = "𒈨", ["mé"] = "𒈪", ["mi"] = "𒈪", ["mu"] = "𒈬",
["na"] = "𒈾", ["ne"] = "𒉈", ["né"] = "𒉌", ["ni"] = "𒉌", ["nu"] = "𒉡",
["ra"] = "𒊏", ["re"] = "𒊑", ["ri"] = "𒊑", ["ru"] = "𒊒",
["ša"] = "𒊭", ["še"] = "𒊺", ["ši"] = "𒅆", ["šu"] = "𒋗", ["šú"] = "𒋙",
["wa"] = "𒉿", ["wi"]= "𒃾", ["wi₅"]= "𒃾",
["ya"] = "𒅀",
["za"] = "𒍝", ["ze"] = "𒍣", ["ze'"] = "𒍢", ["zé"] = "𒍢", ["zi"] = "𒍣", ["zu"] = "𒍪",
-- VC
["ab"] = "𒀊", ["eb"] = "𒅁", ["ib"] = "𒅁", ["ub"] = "𒌒",
["ap"] = "𒀊", ["ep"] = "𒅁", ["ip"] = "𒅁", ["up"] = "𒌒",
["ad"] = "𒀜", ["ed"] = "𒀉", ["id"] = "𒀉", ["ud"] = "𒌓",
["at"] = "𒀜", ["et"] = "𒀉", ["it"] = "𒀉", ["ut"] = "𒌓",
["ag"] = "𒀝", ["eg"] = "𒅅", ["ig"] = "𒅅", ["ug"] = "𒊌",
["ak"] = "𒀝", ["ek"] = "𒅅", ["ik"] = "𒅅", ["uk"] = "𒊌",
["aḫ"] = "𒄴", ["eḫ"] = "𒄴", ["iḫ"]= "𒄴", ["uḫ"] = "𒄴",
["al"] = "𒀠", ["el"] = "𒂖", ["il"] = "𒅋", ["ul"] = "𒌌",
["am"] = "𒄠", ["em"] = "𒅎", ["im"] = "𒅎", ["um"] = "𒌝",
["an"] = "𒀭", ["en"] = "𒂗", ["in"] = "𒅔", ["un"] = "𒌦",
["ar"] = "𒅈", ["er"] = "𒅕", ["ir"] = "𒅕", ["ur"] = "𒌨", ["úr"] = "𒌫",
["aš"] = "𒀸", ["eš"] = "𒌍", ["iš"] = "𒅖", ["uš"] = "𒍑",
["az"] = "𒊍", ["ez"] = "𒄑", ["iz"] = "𒄑", ["uz"] = "𒊻",
-- VCV
["ḫal"] = "𒄬", ["ḫab"] = "𒆸", ["ḫap"] = "𒆸", ["ḫaš"] = "𒋻", ["ḫad"] = "𒉺", ["ḫat"] = "𒉺",
["ḫul"] = "𒅆", ["ḫub"] = "𒄽", ["ḫup"] = "𒄽", ["ḫar"] = "𒄯", ["ḫur"] = "𒄯",
["gal"] = "𒃲", ["kal"] = "𒆗", ["gal₉"] = "𒆗", ["kam"] = "𒄰", ["gám"] = "𒄰",
["kán"] = "𒃷", ["gán"] = "𒃷", ["kab"] = "𒆏", ["kap"] = "𒆏", ["gáb"] = "𒆏", ["gáp"] = "𒆏",
["kar"] = "𒋼𒀀", ["kàr"] = "𒃼", ["gàr"] = "𒃼", ["kaš"] = "𒁉", ["gaš"] = "𒁉",
["kad"] = "𒃰", ["kat"] = "𒃰", ["gad"] = "𒃰", ["gat"] = "𒃰", ["gaz"] = "𒄤",
-- kib, kip are not encoded
["kir"] = "𒄫", ["gir"] = "𒄫", ["kiš"] = "𒆧", ["kid₉"] = "𒃰", ["kit₉"] = "𒃰",
["kal"] = "𒆗", ["kul"] = "𒆰", ["kúl"] = "𒄢", ["gul"] = "𒄢",
["kum"] = "𒄣", ["gum"] = "𒄣", ["kur"] = "𒆳", ["kùr"] = "𒄥", ["gur"] = "𒄥",
["lal"] = "𒇲", ["lam"] = "𒇴", ["lig"] = "𒌨", ["lik"] = "𒌨", ["liš"] = "𒇺", ["luḫ"] = "𒈛", ["lum"] = "𒈝",
["maḫ"] = "𒈤", ["man"] = "𒎙", ["mar"] = "𒈥", ["maš"] = "𒈦", ["meš"] = "𒈨𒌍",
["mil"] = "𒅖", ["mel"] = "𒅖", ["miš"] = "𒈩", ["mur"] = "𒄯", ["mut"] = "𒄷𒄭",
["nam"] = "𒉆", ["nab"] = "𒀮", ["nap"] = "𒀮", ["nir"] = "𒉪", ["niš"] = "𒎙",
["pal"] = "𒁄", ["bal"] = "𒁄", ["pár"] = "𒈦", ["bar"] = "𒈦", ["paš"] = "𒄫",
["pád"] = "𒁁", ["pát"] = "𒁁", ["píd"] = "𒁁", ["pít"] = "𒁁", ["píl"] = "𒉋", ["bíl"] = "𒉋",
["pir"] = "𒌓", ["piš"] = "𒄫", ["biš"] = "𒄫", ["pùš"] = "𒄫", ["pur"] = "𒁓", ["bur"] = "𒁓",
["rad"] = "𒋥", ["rat"] = "𒋥", ["riš"] = "𒊕",
["šaḫ"] = "𒋚", ["šag"] = "𒊕", ["šak"] = "𒊕", ["šal"] = "𒊩", ["šam"] = "𒌑", ["šàm"] = "𒉓",
["šab"] = "𒉺𒅁", ["šap"] = "𒉺𒅁", ["šar"] = "𒊬", ["šìp"] = "𒉺𒅁", ["šir"] = "𒋓", ["šum"] = "𒋳", ["šur"] = "𒋩",
["taḫ"] = "𒈭", ["daḫ"] = "𒈭", ["túḫ"] = "𒈭", ["tág"] = "𒁖", ["ták"] = "𒁖", ["dag"] = "𒁖", ["dak"] = "𒁖",
["tal"] = "𒊑", ["dal"] = "𒊑", ["tám"] = "𒁮", ["dam"] = "𒁮", ["tan"] = "𒆗", ["dan"] = "𒆗",
["tab"] = "𒋰", ["tap"] = "𒋰", ["dáb"] = "𒋰", ["dáp"] = "𒋰", ["tar"] = "𒋻",
["táš"] = "𒁹", ["dáš"] = "𒁹", ["tiš"] = "𒁹", ["diš"] = "𒁹",
["tàš"] = "𒀾", ["tin"] = "𒁷", ["tén"] = "𒁷", ["tim"] = "𒁴", ["dim"] = "𒁴",
["dir"] = "𒋛𒀀", ["tir"] = "𒌁", ["ter"] = "𒌁", ["tíś"] = "𒌨", ["túl"] = "𒇥",
["tum"] = "𒌈", ["dum"] = "𒌈", ["tub"] = "𒁾", ["tup"] = "𒁾", ["dub"] = "𒁾", ["dup"] = "𒁾",
["túr"] = "𒄙", ["dur"] = "𒄙",
["zul"] = "𒂄", ["zum"] = "𒍮",
-- Determiners
["DIDLI"] ="𒀸", ["DINGIR"]="𒀭", ["DUG"]="𒂁", ["É"]="𒂍", ["GAD"]="𒃰", ["GI"]="𒄀",
["GIŠ"]="𒄑", ["GUD"]="𒄞", ["ḪI.A"]="𒄭𒀀", ["ḪUR.SAG"]="𒄯𒊕", ["IM"]="𒅎", ["ITU"]="𒌚",
["KAM"]="𒄰", ["KI"]="𒆠", ["KUR"]="𒆳", ["KUŠ"]="𒋢", ["LÚ"]="𒇽", ["MEŠ"]="𒈨𒌍",["MUL"]="𒀯",
["MUNUS"]="𒊩", ["MUŠ"]="𒈲", ["MUŠEN"]="𒄷", ["NINDA"]="𒃻", ["SAR"]="𒊬",
["SI"]="𒋛", ["SIG"]="𒋠", ["TÚG"]="𒌆", ["𒌑"]="𒌑", ["URU"]="𒌷", ["URUDU"]="𒍐", ["UZU"]="𒍜",
-- Logograms
["KASKAL"]="𒆜", ["LUGAL"]="𒈗", ["GÌR"]="𒄊", ["GÍR"]="𒄈", ["IGI"]="𒃲", ["SÍG"]="𒋠" ,["IŠKUR"]="𒅎"
}
replacements["hit-tr"] = {
["a'"] = "á", ["e'"] = "é", ["i'"] = "í", ["u'"] = "ú",
["s"] = "š", ["s'"] = "ś", ["h"] = "ḫ", ["5"] = "₅", ["9"] = "₉", ["a\\"] = "à",
}
return replacements
jdw7x5yitrp3g0qctoyksauisjsdr51
Module talk:typing-aids/data/hy
829
125552
193512
2024-11-21T10:46:27Z
Pinthura
2424
Moved page from [[en:Module:typing-aids/data/hy]]
193512
wikitext
text/x-wiki
{| class="wikitable"
! oldid || date/time || username || edit summary
|----
| 79210057 || 2024-05-11T10:27:06Z || Vahagn Petrosyan || <nowiki>replace յ̵ with ֈ and ʿ with ʻ</nowiki>
|----
| 42692268 || 2017-04-24T05:30:02Z || Vahagn Petrosyan || <nowiki>these digraphs are not used</nowiki>
|----
| 42692225 || 2017-04-24T05:11:52Z || Benwing2 || <nowiki>fix bugs</nowiki>
|----
| 42692203 || 2017-04-24T04:50:30Z || Benwing2 || <nowiki>Created page with "local data = {} local U = mw.ustring.char local macron = U(0x304) -- macron local dot_above = U(0x307) -- dot above local acute = U(0x301) -- acute local caron = U(0x30C) --..."</nowiki>
|}
8y7lwgx30ep7w0rvtl0ohi3vds75xhq
Module:typing-aids/data/hy
828
125553
193513
2024-11-21T10:46:37Z
Pinthura
2424
Moved page from [[en:Module:typing-aids/data/hy]] ([[Module talk:typing-aids/data/hy|history]])
193513
Scribunto
text/plain
local data = {}
local U = mw.ustring.char
local macron = U(0x304) -- macron
local dot_above = U(0x307) -- dot above
local acute = U(0x301) -- acute
local caron = U(0x30C) -- caron
data["hy"] = {
[1] = { -- sequences involving u
["U" .. acute] = "Ո՛ւ",
["u" .. acute] = "ո՛ւ",
["U<sup>!</sup>"] = "Ո՜ւ",
["u<sup>!</sup>"] = "ո՜ւ",
["U!"] = "Ո՜ւ",
["u!"] = "ո՜ւ",
["U<sup>%?</sup>"] = "Ո՞ւ",
["u<sup>%?</sup>"] = "ո՞ւ",
["U%?"] = "Ո՞ւ",
["u%?"] = "ո՞ւ",
},
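-- (Assumption: the numbered groups are applied in order.) The ու sequences with
-- ՛ ՜ ՞ are handled in group [1] because group [2] maps a bare "u" to "ու";
-- otherwise "u!" would come out as "ու" followed by "՜" instead of "ո՜ւ", with the
-- intonation mark after the digraph rather than between its two letters.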
[2] = { -- remaining special-cased chars in [[Module:Armn-translit]]
["ɦ"] = "ֈ",
["U"] = "Ու",
["u"] = "ու",
["Ü"] = "Ո̈ւ",
["ü"] = "ո̈ւ",
},
[3] = { -- remaining sequences in [[Module:Armn-translit]]
["tʻ"] = "թ",
["čʻ"] = "չ",
["cʻ"] = "ց",
["pʻ"] = "փ",
["kʻ"] = "ք",
["ew"] = "և",
["Tʻ"] = "Թ",
["Čʻ"] = "Չ",
["Cʻ"] = "Ց",
["Pʻ"] = "Փ",
["Kʻ"] = "Ք",
["<sup>!</sup>"] = "՜",
["<sup>?</sup>"] = "՞",
},
[4] = { -- remaining single chars in [[Module:Armn-translit]]
["a"] = "ա",
["b"] = "բ",
["g"] = "գ",
["d"] = "դ",
["e"] = "ե",
["z"] = "զ",
["ē"] = "է",
["ə"] = "ը",
["ž"] = "ժ",
["i"] = "ի",
["l"] = "լ",
["x"] = "խ",
["c"] = "ծ",
["k"] = "կ",
["h"] = "հ",
["j"] = "ձ",
["ł"] = "ղ",
["č"] = "ճ",
["m"] = "մ",
["y"] = "յ",
["n"] = "ն",
["š"] = "շ",
["o"] = "ո",
["p"] = "պ",
["ǰ"] = "ջ",
["ṙ"] = "ռ",
["s"] = "ս",
["v"] = "վ",
["t"] = "տ",
["r"] = "ր",
["w"] = "ւ",
["ō"] = "օ",
["f"] = "ֆ",
["A"] = "Ա",
["B"] = "Բ",
["G"] = "Գ",
["D"] = "Դ",
["E"] = "Ե",
["Z"] = "Զ",
["Ē"] = "Է",
["Ə"] = "Ը",
["Ž"] = "Ժ",
["I"] = "Ի",
["L"] = "Լ",
["X"] = "Խ",
["C"] = "Ծ",
["K"] = "Կ",
["H"] = "Հ",
["J"] = "Ձ",
["Ł"] = "Ղ",
["Č"] = "Ճ",
["M"] = "Մ",
["Y"] = "Յ",
["N"] = "Ն",
["Š"] = "Շ",
["O"] = "Ո",
["P"] = "Պ",
["J̌"] = "Ջ",
["Ṙ"] = "Ռ",
["S"] = "Ս",
["V"] = "Վ",
["T"] = "Տ",
["R"] = "Ր",
["W"] = "Ւ",
["Ō"] = "Օ",
["F"] = "Ֆ",
-- punctuation
[","] = "՝",
["%."] = "։",
[";"] = "․",
[acute] = "՛",
["!"] = "՜",
["%?"] = "՞",
--["%."] = "՟", --obsolete abbreviation
["%-"] = "֊",
["’"] = "՚",
["“"] = "«",
["”"] = "»",
["ʻ"] = "ՙ",
},
}
data["hy-tr"] = {
[1] = {
["l_"] = "ł",
["L_"] = "Ł",
["@%*"] = "Ə",
},
[2] = {
["_"] = macron,
["@"] = "ə",
["ǝ"] = "ə", -- map "wrong" schwa to right one
["%*"] = dot_above,
["`"] = "ʻ",
["'"] = acute,
["%^"] = caron,
},
}
return data
jmrkydj9dfmuz6j0rlag3puahrhscsw
Module talk:typing-aids/data/ja
829
125554
193514
2024-11-21T10:46:47Z
Pinthura
2424
Moved page from [[en:Module:typing-aids/data/ja]]
193514
wikitext
text/x-wiki
{| class="wikitable"
! oldid || date/time || username || edit summary
|----
| 60390590 || 2020-09-13T09:28:50Z || Fish bowl || <nowiki></nowiki>
|----
| 60390586 || 2020-09-13T09:27:32Z || Fish bowl || <nowiki></nowiki>
|----
| 47402992 || 2017-09-01T21:17:26Z || Fish bowl || <nowiki></nowiki>
|----
| 47402990 || 2017-09-01T21:17:14Z || Fish bowl || <nowiki></nowiki>
|----
| 47399920 || 2017-09-01T09:27:53Z || Fish bowl || <nowiki>['wye'] = 'ゑ', ['wyi'] = 'ゐ',</nowiki>
|----
| 47395908 || 2017-09-01T08:07:42Z || Fish bowl || <nowiki></nowiki>
|----
| 47394895 || 2017-09-01T07:52:07Z || Fish bowl || <nowiki></nowiki>
|----
| 47394885 || 2017-09-01T07:51:51Z || Fish bowl || <nowiki>Undo revision 47394776 by [[Special:Contributions/Suzukaze-c|Suzukaze-c]] ([[User talk:Suzukaze-c|talk]])</nowiki>
|----
| 47394776 || 2017-09-01T07:49:01Z || Fish bowl || <nowiki></nowiki>
|----
| 47394567 || 2017-09-01T07:43:23Z || Fish bowl || <nowiki></nowiki>
|----
| 47394542 || 2017-09-01T07:42:35Z || Fish bowl || <nowiki>katakana</nowiki>
|----
| 47394499 || 2017-09-01T07:39:54Z || Fish bowl || <nowiki>how does this work</nowiki>
|}
0fayo1jbdn12a1unpqwgy5k1s5rn3n5
Module:typing-aids/data/ja
828
125555
193515
2024-11-21T10:46:57Z
Pinthura
2424
Moved page from [[en:Module:typing-aids/data/ja]] ([[Module talk:typing-aids/data/ja|history]])
193515
Scribunto
text/plain
--[[
based on https://support.microsoft.com/ja-jp/help/883232
- all caps → katakana
- "ぃぇ [ye]" is a typo for "いぇ [ye]"
- "ゐ [wi]" conflicts with "うぃ [wi]" and has been left out
- "ゑ [we]" conflicts with "うぇ [we]" and has been left out
- "ゐ [wyi]" and "ゑ [wye]" are not listed but has been added
- "ヵ [lka/xka]" and "ヶ [lke/xke]" have been changed to "ゕ [lka/xka]", "ゖ [lke/xke]", "ヵ [LKA/XKA]", and "ヶ [LKE/XKE]"
- "ん [nn]" must be prioritized -- testcase "こんよう [konnyou]"
]]
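-- Worked examples (an assumption about the caller: the numbered groups below are
-- applied in order, [1] first, each pair with gsub; the data itself carries no
-- application logic):
--   "kitte"   -> group [1] rewrites "tt" as "っt", and group [4] then gives きって.
--   "konnyou" -> "nn" is deliberately absent from the doubled-consonant class in
--                group [1]; group [2] turns it into ん before group [3] could read
--                "nyo", so the result is こんよう rather than こんにょう.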
local data = {}
data["ja"] = {
[1] = {
['([bcdfghjklmpqrstvwxyz])%1'] = 'っ%1',
['([BCDFGHJKLMPQRSTVWXYZ])%1'] = 'ッ%1',
},
[2] = {
['ltsu'] = 'っ',
['nn'] = 'ん',
['LTSU'] = 'ッ',
['NN'] = 'ン',
},
[3] = {
['bya'] = 'びゃ',
['bye'] = 'びぇ',
['byi'] = 'びぃ',
['byo'] = 'びょ',
['byu'] = 'びゅ',
['cha'] = 'ちゃ',
['che'] = 'ちぇ',
['chi'] = 'ち',
['cho'] = 'ちょ',
['chu'] = 'ちゅ',
['cya'] = 'ちゃ',
['cye'] = 'ちぇ',
['cyi'] = 'ちぃ',
['cyo'] = 'ちょ',
['cyu'] = 'ちゅ',
['dha'] = 'でゃ',
['dhe'] = 'でぇ',
['dhi'] = 'でぃ',
['dho'] = 'でょ',
['dhu'] = 'でゅ',
['dwa'] = 'どぁ',
['dwe'] = 'どぇ',
['dwi'] = 'どぃ',
['dwo'] = 'どぉ',
['dwu'] = 'どぅ',
['dya'] = 'ぢゃ',
['dye'] = 'ぢぇ',
['dyi'] = 'ぢぃ',
['dyo'] = 'ぢょ',
['dyu'] = 'ぢゅ',
['fwa'] = 'ふぁ',
['fwe'] = 'ふぇ',
['fwi'] = 'ふぃ',
['fwo'] = 'ふぉ',
['fwu'] = 'ふぅ',
['fya'] = 'ふゃ',
['fye'] = 'ふぇ',
['fyi'] = 'ふぃ',
['fyo'] = 'ふょ',
['fyu'] = 'ふゅ',
['gwa'] = 'ぐぁ',
['gwe'] = 'ぐぇ',
['gwi'] = 'ぐぃ',
['gwo'] = 'ぐぉ',
['gwu'] = 'ぐぅ',
['gya'] = 'ぎゃ',
['gye'] = 'ぎぇ',
['gyi'] = 'ぎぃ',
['gyo'] = 'ぎょ',
['gyu'] = 'ぎゅ',
['hya'] = 'ひゃ',
['hye'] = 'ひぇ',
['hyi'] = 'ひぃ',
['hyo'] = 'ひょ',
['hyu'] = 'ひゅ',
['jya'] = 'じゃ',
['jye'] = 'じぇ',
['jyi'] = 'じぃ',
['jyo'] = 'じょ',
['jyu'] = 'じゅ',
['kwa'] = 'くぁ',
['kya'] = 'きゃ',
['kye'] = 'きぇ',
['kyi'] = 'きぃ',
['kyo'] = 'きょ',
['kyu'] = 'きゅ',
['lka'] = 'ゕ',
['lke'] = 'ゖ',
['ltu'] = 'っ',
['lwa'] = 'ゎ',
['lya'] = 'ゃ',
['lye'] = 'ぇ',
['lyi'] = 'ぃ',
['lyo'] = 'ょ',
['lyu'] = 'ゅ',
['mya'] = 'みゃ',
['mye'] = 'みぇ',
['myi'] = 'みぃ',
['myo'] = 'みょ',
['myu'] = 'みゅ',
['nya'] = 'にゃ',
['nye'] = 'にぇ',
['nyi'] = 'にぃ',
['nyo'] = 'にょ',
['nyu'] = 'にゅ',
['pya'] = 'ぴゃ',
['pye'] = 'ぴぇ',
['pyi'] = 'ぴぃ',
['pyo'] = 'ぴょ',
['pyu'] = 'ぴゅ',
['qwa'] = 'くぁ',
['qwe'] = 'くぇ',
['qwi'] = 'くぃ',
['qwo'] = 'くぉ',
['qwu'] = 'くぅ',
['qya'] = 'くゃ',
['qye'] = 'くぇ',
['qyi'] = 'くぃ',
['qyo'] = 'くょ',
['qyu'] = 'くゅ',
['rya'] = 'りゃ',
['rye'] = 'りぇ',
['ryi'] = 'りぃ',
['ryo'] = 'りょ',
['ryu'] = 'りゅ',
['sha'] = 'しゃ',
['she'] = 'しぇ',
['shi'] = 'し',
['sho'] = 'しょ',
['shu'] = 'しゅ',
['swa'] = 'すぁ',
['swe'] = 'すぇ',
['swi'] = 'すぃ',
['swo'] = 'すぉ',
['swu'] = 'すぅ',
['sya'] = 'しゃ',
['sye'] = 'しぇ',
['syi'] = 'しぃ',
['syo'] = 'しょ',
['syu'] = 'しゅ',
['tha'] = 'てゃ',
['the'] = 'てぇ',
['thi'] = 'てぃ',
['tho'] = 'てょ',
['thu'] = 'てゅ',
['tsa'] = 'つぁ',
['tse'] = 'つぇ',
['tsi'] = 'つぃ',
['tso'] = 'つぉ',
['tsu'] = 'つ',
['twa'] = 'とぁ',
['twe'] = 'とぇ',
['twi'] = 'とぃ',
['two'] = 'とぉ',
['twu'] = 'とぅ',
['tya'] = 'ちゃ',
['tye'] = 'ちぇ',
['tyi'] = 'ちぃ',
['tyo'] = 'ちょ',
['tyu'] = 'ちゅ',
['vya'] = 'ヴゃ',
['vye'] = 'ヴぇ',
['vyi'] = 'ヴぃ',
['vyo'] = 'ヴょ',
['vyu'] = 'ヴゅ',
['wha'] = 'うぁ',
['whe'] = 'うぇ',
['whi'] = 'うぃ',
['who'] = 'うぉ',
['whu'] = 'う',
['wye'] = 'ゑ',
['wyi'] = 'ゐ',
['xka'] = 'ゕ',
['xke'] = 'ゖ',
['xtu'] = 'っ',
['xwa'] = 'ゎ',
['xya'] = 'ゃ',
['xye'] = 'ぇ',
['xyi'] = 'ぃ',
['xyo'] = 'ょ',
['xyu'] = 'ゅ',
['zya'] = 'じゃ',
['zye'] = 'じぇ',
['zyi'] = 'じぃ',
['zyo'] = 'じょ',
['zyu'] = 'じゅ',
['BYA'] = 'ビャ',
['BYE'] = 'ビェ',
['BYI'] = 'ビィ',
['BYO'] = 'ビョ',
['BYU'] = 'ビュ',
['CHA'] = 'チャ',
['CHE'] = 'チェ',
['CHI'] = 'チ',
['CHO'] = 'チョ',
['CHU'] = 'チュ',
['CYA'] = 'チャ',
['CYE'] = 'チェ',
['CYI'] = 'チィ',
['CYO'] = 'チョ',
['CYU'] = 'チュ',
['DHA'] = 'デャ',
['DHE'] = 'デェ',
['DHI'] = 'ディ',
['DHO'] = 'デョ',
['DHU'] = 'デュ',
['DWA'] = 'ドァ',
['DWE'] = 'ドェ',
['DWI'] = 'ドィ',
['DWO'] = 'ドォ',
['DWU'] = 'ドゥ',
['DYA'] = 'ヂャ',
['DYE'] = 'ヂェ',
['DYI'] = 'ヂィ',
['DYO'] = 'ヂョ',
['DYU'] = 'ヂュ',
['FWA'] = 'ファ',
['FWE'] = 'フェ',
['FWI'] = 'フィ',
['FWO'] = 'フォ',
['FWU'] = 'フゥ',
['FYA'] = 'フャ',
['FYE'] = 'フェ',
['FYI'] = 'フィ',
['FYO'] = 'フョ',
['FYU'] = 'フュ',
['GWA'] = 'グァ',
['GWE'] = 'グェ',
['GWI'] = 'グィ',
['GWO'] = 'グォ',
['GWU'] = 'グゥ',
['GYA'] = 'ギャ',
['GYE'] = 'ギェ',
['GYI'] = 'ギィ',
['GYO'] = 'ギョ',
['GYU'] = 'ギュ',
['HYA'] = 'ヒャ',
['HYE'] = 'ヒェ',
['HYI'] = 'ヒィ',
['HYO'] = 'ヒョ',
['HYU'] = 'ヒュ',
['JYA'] = 'ジャ',
['JYE'] = 'ジェ',
['JYI'] = 'ジィ',
['JYO'] = 'ジョ',
['JYU'] = 'ジュ',
['KWA'] = 'クァ',
['KYA'] = 'キャ',
['KYE'] = 'キェ',
['KYI'] = 'キィ',
['KYO'] = 'キョ',
['KYU'] = 'キュ',
['LKA'] = 'ヵ',
['LKE'] = 'ヶ',
['LTU'] = 'ッ',
['LWA'] = 'ヮ',
['LYA'] = 'ャ',
['LYE'] = 'ェ',
['LYI'] = 'ィ',
['LYO'] = 'ョ',
['LYU'] = 'ュ',
['MYA'] = 'ミャ',
['MYE'] = 'ミェ',
['MYI'] = 'ミィ',
['MYO'] = 'ミョ',
['MYU'] = 'ミュ',
['NYA'] = 'ニャ',
['NYE'] = 'ニェ',
['NYI'] = 'ニィ',
['NYO'] = 'ニョ',
['NYU'] = 'ニュ',
['PYA'] = 'ピャ',
['PYE'] = 'ピェ',
['PYI'] = 'ピィ',
['PYO'] = 'ピョ',
['PYU'] = 'ピュ',
['QWA'] = 'クァ',
['QWE'] = 'クェ',
['QWI'] = 'クィ',
['QWO'] = 'クォ',
['QWU'] = 'クゥ',
['QYA'] = 'クャ',
['QYE'] = 'クェ',
['QYI'] = 'クィ',
['QYO'] = 'クョ',
['QYU'] = 'クュ',
['RYA'] = 'リャ',
['RYE'] = 'リェ',
['RYI'] = 'リィ',
['RYO'] = 'リョ',
['RYU'] = 'リュ',
['SHA'] = 'シャ',
['SHE'] = 'シェ',
['SHI'] = 'シ',
['SHO'] = 'ショ',
['SHU'] = 'シュ',
['SWA'] = 'スァ',
['SWE'] = 'スェ',
['SWI'] = 'スィ',
['SWO'] = 'スォ',
['SWU'] = 'スゥ',
['SYA'] = 'シャ',
['SYE'] = 'シェ',
['SYI'] = 'シィ',
['SYO'] = 'ショ',
['SYU'] = 'シュ',
['THA'] = 'テャ',
['THE'] = 'テェ',
['THI'] = 'ティ',
['THO'] = 'テョ',
['THU'] = 'テュ',
['TSA'] = 'ツァ',
['TSE'] = 'ツェ',
['TSI'] = 'ツィ',
['TSO'] = 'ツォ',
['TSU'] = 'ツ',
['TWA'] = 'トァ',
['TWE'] = 'トェ',
['TWI'] = 'トィ',
['TWO'] = 'トォ',
['TWU'] = 'トゥ',
['TYA'] = 'チャ',
['TYE'] = 'チェ',
['TYI'] = 'チィ',
['TYO'] = 'チョ',
['TYU'] = 'チュ',
['VYA'] = 'ヴャ',
['VYE'] = 'ヴェ',
['VYI'] = 'ヴィ',
['VYO'] = 'ヴョ',
['VYU'] = 'ヴュ',
['WHA'] = 'ウァ',
['WHE'] = 'ウェ',
['WHI'] = 'ウィ',
['WHO'] = 'ウォ',
['WHU'] = 'ウ',
['WYE'] = 'ヱ',
['WYI'] = 'ヰ',
['XKA'] = 'ヵ',
['XKE'] = 'ヶ',
['XTU'] = 'ッ',
['XWA'] = 'ヮ',
['XYA'] = 'ャ',
['XYE'] = 'ェ',
['XYI'] = 'ィ',
['XYO'] = 'ョ',
['XYU'] = 'ュ',
['ZYA'] = 'ジャ',
['ZYE'] = 'ジェ',
['ZYI'] = 'ジィ',
['ZYO'] = 'ジョ',
['ZYU'] = 'ジュ',
},
[4] = {
['ba'] = 'ば',
['be'] = 'べ',
['bi'] = 'び',
['bo'] = 'ぼ',
['bu'] = 'ぶ',
['ca'] = 'か',
['ce'] = 'せ',
['ci'] = 'し',
['co'] = 'こ',
['cu'] = 'く',
['da'] = 'だ',
['de'] = 'で',
['di'] = 'ぢ',
['do'] = 'ど',
['du'] = 'づ',
['fa'] = 'ふぁ',
['fe'] = 'ふぇ',
['fi'] = 'ふぃ',
['fo'] = 'ふぉ',
['fu'] = 'ふ',
['ga'] = 'が',
['ge'] = 'げ',
['gi'] = 'ぎ',
['go'] = 'ご',
['gu'] = 'ぐ',
['ha'] = 'は',
['he'] = 'へ',
['hi'] = 'ひ',
['ho'] = 'ほ',
['hu'] = 'ふ',
['ja'] = 'じゃ',
['je'] = 'じぇ',
['ji'] = 'じ',
['jo'] = 'じょ',
['ju'] = 'じゅ',
['ka'] = 'か',
['ke'] = 'け',
['ki'] = 'き',
['ko'] = 'こ',
['ku'] = 'く',
['la'] = 'ぁ',
['le'] = 'ぇ',
['li'] = 'ぃ',
['lo'] = 'ぉ',
['lu'] = 'ぅ',
['ma'] = 'ま',
['me'] = 'め',
['mi'] = 'み',
['mo'] = 'も',
['mu'] = 'む',
['n\''] = 'ん', -- [n']
['na'] = 'な',
['ne'] = 'ね',
['ni'] = 'に',
--['nn'] = 'ん',
['no'] = 'の',
['nu'] = 'ぬ',
['pa'] = 'ぱ',
['pe'] = 'ぺ',
['pi'] = 'ぴ',
['po'] = 'ぽ',
['pu'] = 'ぷ',
['qa'] = 'くぁ',
['qe'] = 'くぇ',
['qi'] = 'くぃ',
['qo'] = 'くぉ',
['qu'] = 'く',
['ra'] = 'ら',
['re'] = 'れ',
['ri'] = 'り',
['ro'] = 'ろ',
['ru'] = 'る',
['sa'] = 'さ',
['se'] = 'せ',
['si'] = 'し',
['so'] = 'そ',
['su'] = 'す',
['ta'] = 'た',
['te'] = 'て',
['ti'] = 'ち',
['to'] = 'と',
['tu'] = 'つ',
['va'] = 'ヴぁ',
['ve'] = 'ヴぇ',
['vi'] = 'ヴぃ',
['vo'] = 'ヴぉ',
['vu'] = 'ヴ',
['wa'] = 'わ',
['we'] = 'うぇ',
['wi'] = 'うぃ',
['wo'] = 'を',
['wu'] = 'う',
['xa'] = 'ぁ',
['xe'] = 'ぇ',
['xi'] = 'ぃ',
['xn'] = 'ん',
['xo'] = 'ぉ',
['xu'] = 'ぅ',
['ya'] = 'や',
['ye'] = 'ぃぇ',
['yi'] = 'い',
['yo'] = 'よ',
['yu'] = 'ゆ',
['za'] = 'ざ',
['ze'] = 'ぜ',
['zi'] = 'じ',
['zo'] = 'ぞ',
['zu'] = 'ず',
['BA'] = 'バ',
['BE'] = 'ベ',
['BI'] = 'ビ',
['BO'] = 'ボ',
['BU'] = 'ブ',
['CA'] = 'カ',
['CE'] = 'セ',
['CI'] = 'シ',
['CO'] = 'コ',
['CU'] = 'ク',
['DA'] = 'ダ',
['DE'] = 'デ',
['DI'] = 'ヂ',
['DO'] = 'ド',
['DU'] = 'ヅ',
['FA'] = 'ファ',
['FE'] = 'フェ',
['FI'] = 'フィ',
['FO'] = 'フォ',
['FU'] = 'フ',
['GA'] = 'ガ',
['GE'] = 'ゲ',
['GI'] = 'ギ',
['GO'] = 'ゴ',
['GU'] = 'グ',
['HA'] = 'ハ',
['HE'] = 'ヘ',
['HI'] = 'ヒ',
['HO'] = 'ホ',
['HU'] = 'フ',
['JA'] = 'ジャ',
['JE'] = 'ジェ',
['JI'] = 'ジ',
['JO'] = 'ジョ',
['JU'] = 'ジュ',
['KA'] = 'カ',
['KE'] = 'ケ',
['KI'] = 'キ',
['KO'] = 'コ',
['KU'] = 'ク',
['LA'] = 'ァ',
['LE'] = 'ェ',
['LI'] = 'ィ',
['LO'] = 'ォ',
['LU'] = 'ゥ',
['MA'] = 'マ',
['ME'] = 'メ',
['MI'] = 'ミ',
['MO'] = 'モ',
['MU'] = 'ム',
['N\''] = 'ン', -- [N']
['NA'] = 'ナ',
['NE'] = 'ネ',
['NI'] = 'ニ',
--['NN'] = 'ン',
['NO'] = 'ノ',
['NU'] = 'ヌ',
['PA'] = 'パ',
['PE'] = 'ペ',
['PI'] = 'ピ',
['PO'] = 'ポ',
['PU'] = 'プ',
['QA'] = 'クァ',
['QE'] = 'クェ',
['QI'] = 'クィ',
['QO'] = 'クォ',
['QU'] = 'ク',
['RA'] = 'ラ',
['RE'] = 'レ',
['RI'] = 'リ',
['RO'] = 'ロ',
['RU'] = 'ル',
['SA'] = 'サ',
['SE'] = 'セ',
['SI'] = 'シ',
['SO'] = 'ソ',
['SU'] = 'ス',
['TA'] = 'タ',
['TE'] = 'テ',
['TI'] = 'チ',
['TO'] = 'ト',
['TU'] = 'ツ',
['VA'] = 'ヴァ',
['VE'] = 'ヴェ',
['VI'] = 'ヴィ',
['VO'] = 'ヴォ',
['VU'] = 'ヴ',
['WA'] = 'ワ',
['WE'] = 'ウェ',
['WI'] = 'ウィ',
['WO'] = 'ヲ',
['WU'] = 'ウ',
['XA'] = 'ァ',
['XE'] = 'ェ',
['XI'] = 'ィ',
['XN'] = 'ン',
['XO'] = 'ォ',
['XU'] = 'ゥ',
['YA'] = 'ヤ',
['YE'] = 'ィェ',
['YI'] = 'イ',
['YO'] = 'ヨ',
['YU'] = 'ユ',
['ZA'] = 'ザ',
['ZE'] = 'ゼ',
['ZI'] = 'ジ',
['ZO'] = 'ゾ',
['ZU'] = 'ズ',
},
[5] = {
['a'] = 'あ',
['e'] = 'え',
['i'] = 'い',
['n'] = 'ん',
['o'] = 'お',
['u'] = 'う',
['A'] = 'ア',
['E'] = 'エ',
['I'] = 'イ',
['N'] = 'ン',
['O'] = 'オ',
['U'] = 'ウ',
},
[6] = {
['-'] = 'ー',
['/'] = '・',
},
}
return data
dhsbqt1f10sn8dbn6oy3odpyt6dnzwm
Module talk:typing-aids/data/kn
829
125556
193516
2024-11-21T10:47:07Z
Pinthura
2424
Moved page from [[en:Module:typing-aids/data/kn]]
193516
wikitext
text/x-wiki
{| class="wikitable"
! oldid || date/time || username || edit summary
|----
| 63760257 || 2021-08-26T12:45:13Z || Kutchkutch || <nowiki></nowiki>
|----
| 63710094 || 2021-08-21T06:11:05Z || Svartava || <nowiki></nowiki>
|----
| 63709986 || 2021-08-21T05:32:08Z || Svartava || <nowiki></nowiki>
|----
| 63709982 || 2021-08-21T05:31:28Z || Svartava || <nowiki>Again test</nowiki>
|----
| 63709796 || 2021-08-21T04:49:37Z || Svartava || <nowiki>Undo revision 63709792 by [[Special:Contributions/Svartava2|Svartava2]] ([[User talk:Svartava2|talk]])</nowiki>
|----
| 63709792 || 2021-08-21T04:48:36Z || Svartava || <nowiki></nowiki>
|----
| 63709788 || 2021-08-21T04:47:03Z || Svartava || <nowiki>Match with Sanskrit typing aids</nowiki>
|----
| 63709772 || 2021-08-21T04:38:59Z || Svartava || <nowiki>Created page with "local data = {} local U = mw.ustring.char local candrabindu = U(0xC81) local anusvAra = U(0xC82) local visarga = U(0xC83) local virAma = U(0xCCD) local avagraha = "ಽ" local consonants = "ಕಖಗಘಙಚಛಜಝಞಟಠಡಢಣತಥದಧನಪಫಬಭಮಯರಲವಶಷಸಹ" local consonant = "[" .. consonants .. "]" local acute = U(0x301) -- combining acute data["kn"] = { -- Vowels and modifiers. Do the diphthongs and diaereses first. {"ai", "ಐ"},..."</nowiki>
|}
19ppogodmyiatszzwiwguj2luz0wveb
Module:typing-aids/data/kn
828
125557
193517
2024-11-21T10:47:17Z
Pinthura
2424
Moved page from [[en:Module:typing-aids/data/kn]] ([[Module talk:typing-aids/data/kn|history]])
193517
Scribunto
text/plain
local data = {}
local U = mw.ustring.char
local candrabindu = U(0xC81)
local anusvAra = U(0xC82)
local visarga = U(0xC83)
local virAma = U(0xCCD)
local avagraha = "ಽ"
local consonants = "ಕಖಗಘಙಚಛಜಝಞಟಠಡಢಣತಥದಧನಪಫಬಭಮಯರಱಲವಶಷಸಹಳೞ"
local consonant = "[" .. consonants .. "]"
local acute = U(0x301) -- combining acute
data["kn"] = {
-- Vowels and modifiers. Do the diphthongs and diaereses first.
{"ai", "ಐ"},
{"au", "ಔ"},
{"ï", "ಇ"},
{"ü", "ಉ"},
{"a", "ಅ"},
{"ā", "ಆ"},
{"i", "ಇ"},
{"ī", "ಈ"},
{"u", "ಉ"},
{"ū", "ಊ"},
{"e", "ಎ"},
{"ē", "ಏ"},
{"o", "ಒ"},
{"ō", "ಓ"},
{"ṝ", "ೠ"},
{"ṛ", "ಋ"},
{"r̥", "ಋ"},
{"ḹ", "ೡ"},
{"l̥", "ಌ"},
{"(ಅ)[%-/]([ಇಉ])", "%1%2"}, -- a-i, a-u for ಅಇ, ಅಉ; must follow rules for "ai", "au"
{"(ಲ)[%-/]([ೃ ೄ])", "%1%2"}, -- l-R, l-RR for ಲೃ, ಲೄ; must follow rules for "lR", "lRR"
-- Two-letter consonants must go before h.
{"kh", "ಖ"},
{"gh", "ಘ"},
{"ch", "ಛ"},
{"jh", "ಝ"},
{"ṭh", "ಠ"},
{"ḍh", "ಢ"},
{"th", "ಥ"},
{"dh", "ಧ"},
{"ph", "ಫ"},
{"bh", "ಭ"},
{"h", "ಹ"},
-- Other stops.
{"k", "ಕ"},
{"g", "ಗ"},
{"c", "ಚ"},
{"j", "ಜ"},
{"ṭ", "ಟ"},
{"ḍ", "ಡ"},
{"t", "ತ"},
{"d", "ದ"},
{"p", "ಪ"},
{"b", "ಬ"},
-- Nasals.
{"ṅ", "ಙ"},
{"ñ", "ಞ"},
{"ṇ", "ಣ"},
{"n", "ನ"},
{"m", "ಮ"},
-- Remaining consonants.
{"y", "ಯ"},
{"r", "ರ"},
{"l", "ಲ"},
{"v", "ವ"},
{"ś", "ಶ"},
{"ṣ", "ಷ"},
{"s", "ಸ"},
{"ḷ", "ಳ"},
{"m̐", candrabindu},
{"ṃ", anusvAra},
{"ḥ", visarga},
{"'", avagraha},
-- This rule must be applied twice because a consonant may only be in one capture per operation,
-- so "CCC" will only recognize the first two consonants. Must follow all consonant conversions.
{"(" .. consonant .. ")(" .. consonant .. ")", "%1" .. virAma .. "%2"},
{"(" .. consonant .. ")(" .. consonant .. ")", "%1" .. virAma .. "%2"},
{"(" .. consonant .. ")$", "%1" .. virAma},
{acute, ""},
}
local vowels = {
["ಇ"] = U(0xCBF),
["ಉ"] = U(0xCC1),
["ಋ"] = U(0xCC3),
["ಌ"] = U(0xCE2),
["ಎ"] = U(0xCC6),
["ಏ"] = U(0xCC7),
["ಎ"] = U(0xCCA),
["ಓ"] = U(0xCCB),
["ಆ"] = U(0xCBE),
["ಈ"] = U(0xCC0),
["ಊ"] = U(0xCC2),
["ೠ"] = U(0xCC4),
["ೡ"] = U(0xCE3),
["ಐ"] = U(0xCC8),
["ಔ"] = U(0xCCC),
}
-- Convert independent vowels to diacritics after consonants. Must go after all consonant conversions.
for independentForm, diacriticalForm in pairs(vowels) do
table.insert(data["kn"], {"(" .. consonant .. ")" .. independentForm, "%1" .. diacriticalForm})
end
-- This must go last, after independent vowels are converted to diacritics, or "aï", "aü" won't work.
table.insert(data["kn"], {"(" .. consonant .. ")ಅ", "%1"})
data["kn-tr"] = {
[1] = {
["A"] = "ā",
["I"] = "ī",
["U"] = "ū",
["E"] = "ē",
["O"] = "ō",
["J"] = "ñ",
["T"] = "ṭ",
["D"] = "ḍ",
["L"] = "ḷ",
["N"] = "ṇ",
["G"] = "ṅ",
["z"] = "ś",
["S"] = "ṣ",
["M"] = "ṃ",
["H"] = "ḥ",
},
[2] = {
["lR"] = "l̥",
["lRR"] = "ḹ",
},
[3] = {
["R"] = "ṛ",
["RR"] = "ṝ",
},
}
return data
jjetjf9e7b3vs9h5tmdzheakjlv8jw2
Module talk:typing-aids/data/mai
829
125558
193518
2024-11-21T10:47:27Z
Pinthura
2424
Moved page from [[en:Module:typing-aids/data/mai]]
193518
wikitext
text/x-wiki
{| class="wikitable"
! oldid || date/time || username || edit summary
|----
| 65207709 || 2022-01-05T21:09:04Z || Kutchkutch || <nowiki></nowiki>
|----
| 65207690 || 2022-01-05T21:06:03Z || Kutchkutch || <nowiki></nowiki>
|----
| 63868838 || 2021-09-07T10:36:28Z || Kutchkutch || <nowiki></nowiki>
|----
| 63868727 || 2021-09-07T10:19:24Z || Kutchkutch || <nowiki></nowiki>
|----
| 63844747 || 2021-09-05T00:04:51Z || Kutchkutch || <nowiki></nowiki>
|----
| 63813958 || 2021-09-01T17:58:25Z || Kutchkutch || <nowiki></nowiki>
|----
| 63813946 || 2021-09-01T17:56:16Z || Kutchkutch || <nowiki></nowiki>
|----
| 63786329 || 2021-08-29T11:24:59Z || Kutchkutch || <nowiki>Created page with "local data = {} local U = mw.ustring.char local anusvAra = U(0x114C0) local visarga = U(0x114C1) local virAma = U(0x114C2) local avagraha = "𑓄" local consonants = "𑒏𑒐𑒑𑒒𑒓𑒔𑒕𑒖𑒗𑒘𑒙𑒚𑒛𑒜𑒝𑒞𑒟𑒠𑒡𑒢𑒣𑒤𑒥𑒦𑒧𑒨𑒩𑒪𑒫𑒮𑒬𑒭𑒯" local consonant = "[" .. consonants .. "]" local acute = U(0x301) -- combining acute data["mai"] = { -- Vowels and modifiers. Do the diphthongs and diaereses first. {"..."</nowiki>
|}
q0o9z6ec91w767hdex5fw1yd62kibyz
Module:typing-aids/data/mai
828
125559
193519
2024-11-21T10:47:37Z
Pinthura
2424
Moved page from [[en:Module:typing-aids/data/mai]] ([[Module talk:typing-aids/data/mai|history]])
193519
Scribunto
text/plain
local data = {}
local U = mw.ustring.char
local anusvAra = U(0x114C0)
local visarga = U(0x114C1)
local virAma = U(0x114C2)
local nuktA = U(0x114C3)
local candrabindu = U(0x114BF)
local avagraha = "𑓄"
local consonants = "𑒏𑒐𑒑𑒒𑒓𑒔𑒕𑒖𑒗𑒘𑒙𑒚𑒛𑒜𑒝𑒞𑒟𑒠𑒡𑒢𑒣𑒤𑒥𑒦𑒧𑒨𑒩𑒪𑒫𑒮𑒬𑒭𑒯"
local consonant = "[" .. consonants .. "]" .. nuktA .. "?"
local acute = U(0x301) -- combining acute
data["mai"] = {
-- Vowels and modifiers. Do the diphthongs and diaereses first.
{"ai", "𑒌"},
{"au", "𑒎"},
{"ä", "𑒁"},
{"ï", "𑒃"},
{"ü", "𑒅"},
{"a", "𑒁"},
{"ā", "𑒂"},
{"i", "𑒃"},
{"ī", "𑒄"},
{"u", "𑒅"},
{"ū", "𑒆"},
{"e", U(0x114BA)},
{"ē", "𑒋"},
{"o", U(0x114BD)},
{"ō", "𑒍"},
{"ṝ", "𑒈"},
{"ṛ", "𑒇"},
{"r̥", "𑒇"},
{"ḹ", "𑒊"},
{"ḷ", "𑒉"},
{"(𑒁)[%-/]([𑒃𑒅])", "%1%2"}, -- a-i, a-u for 𑒁𑒃, 𑒁𑒅; must follow rules for "ai", "au"
-- Two-letter consonants must go before h.
{"kh", "𑒐"},
{"gh", "𑒒"},
{"ch", "𑒕"},
{"jh", "𑒗"},
{"ṭh", "𑒚"},
{"ḍh", "𑒜"},
{"ɽh", "𑒜𑓃"},
{"th", "𑒟"},
{"dh", "𑒡"},
{"ph", "𑒤"},
{"bh", "𑒦"},
{"h", "𑒯"},
-- Other stops.
{"k", "𑒏"},
{"g", "𑒑"},
{"c", "𑒔"},
{"j", "𑒖"},
{"ṭ", "𑒙"},
{"ḍ", "𑒛"},
{"ɽ", "𑒛𑓃"},
{"t", "𑒞"},
{"d", "𑒠"},
{"p", "𑒣"},
{"b", "𑒥"},
-- Nasals.
{"ṅ", "𑒓"},
{"ñ", "𑒘"},
{"ṇ", "𑒝"},
{"n", "𑒢"},
{"n", "𑒢"},
{"m", "𑒧"},
-- Remaining consonants.
{"y", "𑒨"},
{"r", "𑒩"},
{"l", "𑒪"},
{"v", "𑒫"},
{"ś", "𑒬"},
{"ṣ", "𑒭"},
{"s", "𑒮"},
{"ṃ", anusvAra},
{"ḥ", visarga},
{"'", avagraha},
{"~", candrabindu},
-- This rule must be applied twice because a consonant may only be in one capture per operation,
-- so "CCC" will only recognize the first two consonants. Must follow all consonant conversions.
{"(" .. consonant .. ")(" .. consonant .. ")", "%1" .. virAma .. "%2"},
{"(" .. consonant .. ")(" .. consonant .. ")", "%1" .. virAma .. "%2"},
{"(" .. consonant .. ")$", "%1" .. virAma},
{acute, ""},
}
local vowels = {
["𑒃"] = U(0x114B1),
["𑒅"] = U(0x114B3),
["𑒇"] = U(0x114B5),
["𑒉"] = U(0x114B7),
["𑒋"] = U(0x114B9),
["𑒍"] = U(0x114BC),
["𑒂"] = U(0x114B0),
["𑒄"] = U(0x114B2),
["𑒆"] = U(0x114B4),
["𑒈"] = U(0x114B6),
["𑒊"] = U(0x114B8),
["𑒌"] = U(0x114BB),
["𑒎"] = U(0x114BE),
}
-- Convert independent vowels to diacritics after consonants. Must go after all consonant conversions.
for independentForm, diacriticalForm in pairs(vowels) do
table.insert(data["mai"], {"(" .. consonant .. ")" .. independentForm, "%1" .. diacriticalForm})
end
-- This must go last, after independent vowels are converted to diacritics, or "aï", "aü" won't work.
table.insert(data["mai"], {"(" .. consonant .. ")𑒁", "%1"})
-- [[w:Harvard-Kyoto]] to [[w:International Alphabet of Sanskrit Transliteration]]
data["mai-tr"] = {
[1] = {
["A"] = "ā",
["I"] = "ī",
["U"] = "ū",
["E"] = "ē",
["O"] = "ō",
["J"] = "ñ",
["T"] = "ṭ",
["D"] = "ḍ",
["N"] = "ṇ",
["G"] = "ṅ",
["z"] = "ś",
["S"] = "ṣ",
["M"] = "ṃ",
["H"] = "ḥ",
["lRR"] = "ḹ",
["/"] = acute,
},
[2] = {
["_rh_"] = "ɽh",
["lR"] = "ḷ",
["RR"] = "ṝ",
},
[3] = {
["_r_"] = "ɽ",
["R"] = "ṛ",
},
}
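-- Note on the grouping above (assumption: group [1] is applied before [2], and [2]
-- before [3]): "lRR" must be rewritten while still intact, otherwise "RR" -> "ṝ"
-- in group [2] would break it apart; likewise "lR", "RR" and "_rh_" are handled
-- before the bare "R" -> "ṛ" and "_r_" -> "ɽ" in group [3].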
return data
0y2n9eubi1do4biqwd8crgfja5r9qmt
Module talk:typing-aids/data/mwr
829
125560
193520
2024-11-21T10:47:47Z
Pinthura
2424
Moved page from [[en:Module:typing-aids/data/mwr]]
193520
wikitext
text/x-wiki
{| class="wikitable"
! oldid || date/time || username || edit summary
|----
| 78757628 || 2024-04-05T04:54:56Z || Theknightwho || <nowiki>Use faster implementation of mw.ustring.char.</nowiki>
|----
| 63868254 || 2021-09-07T08:59:13Z || Kutchkutch || <nowiki>Created page with "local data = {} local U = mw.ustring.char local nuktA = U(0x11173) local consonants = "𑅕𑅖𑅗𑅘𑅙𑅚𑅛𑅜𑅝𑅞𑅟𑅠𑅡𑅢𑅣𑅤𑅥𑅦𑅧𑅨𑅩𑅪𑅫𑅬𑅭𑅮𑅯𑅰𑅱𑅲" local consonant = "[" .. consonants .. "]" .. nuktA .. "?" local acute = U(0x301) -- combining acute data["mwr"] = { -- Vowels and modifiers. Do the diphthongs and diaereses first. {"ai", "𑅑"}, {"au", "𑅒"}, {"ä", "𑅐"}, {"ö", "𑅔"}, {"ï", "..."</nowiki>
|}
fuqd6zcfdzd8ffhqwyrigm6vt99ix7s
Module:typing-aids/data/mwr
828
125561
193521
2024-11-21T10:47:57Z
Pinthura
2424
Moved page from [[en:Module:typing-aids/data/mwr]] ([[Module talk:typing-aids/data/mwr|history]])
193521
Scribunto
text/plain
local data = {}
local U = require("Module:string/char")
local nuktA = U(0x11173)
local consonants = "𑅕𑅖𑅗𑅘𑅙𑅚𑅛𑅜𑅝𑅞𑅟𑅠𑅡𑅢𑅣𑅤𑅥𑅦𑅧𑅨𑅩𑅪𑅫𑅬𑅭𑅮𑅯𑅰𑅱𑅲"
local consonant = "[" .. consonants .. "]" .. nuktA .. "?"
local acute = U(0x301) -- combining acute
data["mwr"] = {
-- Vowels and modifiers. Do the diphthongs and diaereses first.
{"ai", "𑅑"},
{"au", "𑅒"},
{"ä", "𑅐"},
{"ö", "𑅔"},
{"ï", "𑅑"},
{"ü", "𑅒"},
{"a", "𑅐"},
{"ā", "𑅐"},
{"i", "𑅑"},
{"ī", "𑅑"},
{"u", "𑅒"},
{"ū", "𑅒"},
{"e", "𑅓"},
{"o", "𑅔"},
{"(𑅐)[%-/]([𑅑𑅒])", "%1%2"}, -- a-i, a-u for 𑅐𑅑, 𑅐𑅒; must follow rules for "ai", "au"
-- Two-letter consonants must go before h.
{"kh", "𑅖"},
{"gh", "𑅘"},
{"ch", "𑅚"},
{"jh", "𑅜"},
{"ṭh", "𑅟"},
{"ḍh", "𑅡"},
{"th", "𑅤"},
{"dh", "𑅦"},
{"ph", "𑅩"},
{"bh", "𑅫"},
{"h", "𑅱"},
-- Other stops.
{"k", "𑅕"},
{"g", "𑅗"},
{"c", "𑅙"},
{"j", "𑅛"},
{"ṭ", "𑅞"},
{"ḍ", "𑅠"},
{"ṛ", "𑅲"},
{"t", "𑅣"},
{"d", "𑅥"},
{"p", "𑅨"},
{"b", "𑅪"},
-- Nasals.
{"ñ", "𑅝"},
{"ṇ", "𑅢"},
{"n", "𑅧"},
{"n", "𑅧"},
{"m", "𑅬"},
-- Remaining consonants.
{"y", "𑅛"},
{"r", "𑅭"},
{"l", "𑅮"},
{"v", "𑅯"},
{"ś", "𑅰"},
{"s", "𑅰"},
{"ṣ", "𑅖"},
{"ṃ", "𑅧"},
}
-- This must go last, after independent vowels are converted to diacritics, or "aï", "aü" won't work.
table.insert(data["mwr"], {"(" .. consonant .. ")𑅐", "%1"})
-- [[w:Harvard-Kyoto]] to [[w:International Alphabet of Sanskrit Transliteration]]
data["mwr-tr"] = {
[1] = {
["A"] = "ā",
["I"] = "ī",
["U"] = "ū",
["J"] = "ñ",
["T"] = "ṭ",
["D"] = "ḍ",
["N"] = "ṇ",
["z"] = "ś",
["S"] = "ṣ",
["M"] = "ṃ",
["/"] = acute,
},
[2] = {
["R"] = "ṛ",
},
}
return data
lsmyufpoqf7x5fxzcs2jyfyito8osgl
Module talk:typing-aids/data/omr
829
125562
193522
2024-11-21T10:48:07Z
Pinthura
2424
Moved page from [[en:Module:typing-aids/data/omr]]
193522
wikitext
text/x-wiki
{| class="wikitable"
! oldid || date/time || username || edit summary
|----
| 63833772 || 2021-09-03T17:11:23Z || Kutchkutch || <nowiki></nowiki>
|----
| 63760440 || 2021-08-26T13:23:22Z || Kutchkutch || <nowiki></nowiki>
|----
| 63760398 || 2021-08-26T13:15:36Z || Kutchkutch || <nowiki></nowiki>
|----
| 63760183 || 2021-08-26T12:19:09Z || Kutchkutch || <nowiki></nowiki>
|----
| 62931041 || 2021-06-21T09:01:46Z || SodhakSH || <nowiki></nowiki>
|----
| 62931036 || 2021-06-21T09:00:06Z || SodhakSH || <nowiki>???</nowiki>
|----
| 62931013 || 2021-06-21T08:56:03Z || SodhakSH || <nowiki>Undo revision 62931010 by [[Special:Contributions/SodhakSH|SodhakSH]] ([[User talk:SodhakSH|talk]])</nowiki>
|----
| 62931010 || 2021-06-21T08:55:01Z || SodhakSH || <nowiki></nowiki>
|----
| 62930987 || 2021-06-21T08:39:49Z || SodhakSH || <nowiki>SodhakSH moved page [[Module:User:SodhakSH/typing-aids/data/Modi]] to [[Module:typing-aids/data/omr]] without leaving a redirect: Time to test</nowiki>
|----
| 62930986 || 2021-06-21T08:38:58Z || SodhakSH || <nowiki></nowiki>
|----
| 62930919 || 2021-06-21T08:12:16Z || SodhakSH || <nowiki>Battery about to die... let me save this</nowiki>
|----
| 62930886 || 2021-06-21T07:59:14Z || SodhakSH || <nowiki></nowiki>
|----
| 62930873 || 2021-06-21T07:54:51Z || SodhakSH || <nowiki>Other stops & remaining consonants</nowiki>
|----
| 62930844 || 2021-06-21T07:38:30Z || SodhakSH || <nowiki>Nasals</nowiki>
|----
| 62930838 || 2021-06-21T07:36:15Z || SodhakSH || <nowiki></nowiki>
|----
| 62930601 || 2021-06-21T05:44:35Z || SodhakSH || <nowiki>SodhakSH moved page [[Module:typing-aids/data/Modi]] to [[Module:User:SodhakSH/typing-aids/data/Modi]] without leaving a redirect: I'm still working on this</nowiki>
|----
| 62930598 || 2021-06-21T05:43:10Z || SodhakSH || <nowiki>Created page with "local data = {} local U = mw.ustring.char local anusvAra = U(0x1163D) local visarga = U(0x1163E) local virAma = U(0x1163F) local avagraha = "ऽ" local consonants = "𑘎..."</nowiki>
|}
roqu4ncqs56bgcfnd44j59g1mwk07ug
Module:typing-aids/data/omr
828
125563
193523
2024-11-21T10:48:17Z
Pinthura
2424
Moved page from [[en:Module:typing-aids/data/omr]] ([[Module talk:typing-aids/data/omr|history]])
193523
Scribunto
text/plain
local data = {}
local U = mw.ustring.char
local anusvAra = U(0x1163D)
local visarga = U(0x1163E)
local virAma = U(0x1163F)
local zwj = U(0x200D)
local avagraha = "ऽ"
local consonants = "𑘎𑘏𑘐𑘑𑘒𑘓𑘔𑘕𑘖𑘗𑘘𑘙𑘚𑘛𑘜𑘝𑘞𑘟𑘠𑘡𑘢𑘣𑘤𑘥𑘦𑘧𑘨𑘩𑘪𑘯𑘫𑘬𑘭𑘮"
local consonant = "[" .. consonants .. "]"
local acute = U(0x301) -- combining acute
data["omr"] = {
-- Vowels and modifiers. Do the diphthongs and diaereses first.
{"ai", "𑘋"},
{"au", "𑘍"},
{"ï", "𑘃"},
{"i", "𑘃"},
{"ī", "𑘃"},
{"ü", "𑘄"},
{"u", "𑘄"},
{"ū", "𑘄"},
{"a", "𑘀"},
{"ā", "𑘁"},
{"e", "𑘊"},
{"o", "𑘌"},
{"ṝ", "𑘇"},
{"ṛ", "𑘆"},
{"r̥", "𑘆"},
{"ṟ", "𑘨"..virAma.. zwj}, -- eyelash र
{"ḹ", "𑘉"},
{"l̥", "𑘈"},
{"(𑘀)[%-/]([𑘃𑘄])", "%1%2"}, -- a-i, a-u for 𑘀𑘃, 𑘀𑘄; must follow rules for "ai", "au"
-- Two-letter consonants must go before h.
{"kh", "𑘏"},
{"gh", "𑘑"},
{"ch", "𑘔"},
{"jh", "𑘖"},
{"ṭh", "𑘙"},
{"ḍh", "𑘛"},
{"th", "𑘞"},
{"dh", "𑘠"},
{"ph", "𑘣"},
{"bh", "𑘥"},
{"h", "𑘮"},
-- Other stops.
{"k", "𑘎"},
{"g", "𑘐"},
{"c", "𑘓"},
{"j", "𑘕"},
{"ṭ", "𑘘"},
{"ḍ", "𑘚"},
{"t", "𑘝"},
{"d", "𑘟"},
{"p", "𑘢"},
{"b", "𑘤"},
-- Nasals.
{"ṅ", "𑘒"},
{"ñ", "𑘗"},
{"ṇ", "𑘜"},
{"n", "𑘡"},
{"m", "𑘦"},
-- Remaining consonants.
{"y", "𑘧"},
{"r", "𑘨"},
{"l", "𑘩"},
{"v", "𑘪"},
{"ś", "𑘫"},
{"ṣ", "𑘬"},
{"s", "𑘭"},
{"ḷ", "𑘯"},
{"ṃ", anusvAra},
{"ḥ", visarga},
{"'", avagraha},
-- This rule must be applied twice because a consonant may only be in one capture per operation,
-- so "CCC" will only recognize the first two consonants. Must follow all consonant conversions.
{"(" .. consonant .. ")(" .. consonant .. ")", "%1" .. virAma .. "%2"},
{"(" .. consonant .. ")(" .. consonant .. ")", "%1" .. virAma .. "%2"},
{"(" .. consonant .. ")$", "%1" .. virAma},
{acute, ""},
}
local vowels = {
["𑘁"] = U(0x11630),
["𑘂"] = U(0x11631),
["𑘃"] = U(0x11632),
["𑘄"] = U(0x11633),
["𑘅"] = U(0x11634),
["𑘆"] = U(0x11635),
["𑘇"] = U(0x11636),
["𑘈"] = U(0x11637),
["𑘉"] = U(0x11638),
["𑘊"] = U(0x11639),
["𑘋"] = U(0x1163A),
["𑘌"] = U(0x1163B),
["𑘍"] = U(0x1163C),
}
-- Convert independent vowels to diacritics after consonants. Must go after all consonant conversions.
for independentForm, diacriticalForm in pairs(vowels) do
table.insert(data["omr"], {"(" .. consonant .. ")" .. independentForm, "%1" .. diacriticalForm})
end
-- This must go last, after independent vowels are converted to diacritics, or "aï", "aü" won't work.
table.insert(data["omr"], {"(" .. consonant .. ")𑘀", "%1"})
data["omr-tr"] = {
[1] = {
["A"] = "ā",
["I"] = "ī",
["U"] = "u",
["J"] = "ñ",
["T"] = "ṭ",
["D"] = "ḍ",
["N"] = "ṇ",
["G"] = "ṅ",
["z"] = "ś",
["S"] = "ṣ",
["M"] = "ṃ",
["H"] = "ḥ",
["LRR"] = "ḹ",
["/"] = acute,
},
[2] = {
["LR"] = "l̥",
["RR"] = "ṝ",
["r_"] = "ṟ",
},
[3] = {
["R"] = "ṛ",
["L"] = "ḷ",
},
}
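-- Editorial sketch, not part of the imported module: data["omr"] above is an ordered list
-- of {pattern, replacement} pairs, meant to be applied one after another by a driver such
-- as [[Module:typing-aids]]. A minimal driver under that assumption (the name applyRules
-- is hypothetical) could look like this:
local function applyRules(text, rules)
	for _, rule in ipairs(rules) do
		-- mw.ustring.gsub accepts either a string or a table as the replacement value.
		text = mw.ustring.gsub(text, rule[1], rule[2])
	end
	return text
end
-- For example, applyRules("ka", data["omr"]) yields "𑘎": "a" and "k" are first converted
-- to independent 𑘀 and 𑘎, and the final {"(consonant)𑘀", "%1"} rule then absorbs the
-- inherent vowel.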
return data
qckm5lbi2adoi8qt1gsv2d2rm4ct968
Module talk:typing-aids/data/omr-Deva
829
125564
193524
2024-11-21T10:48:27Z
Pinthura
2424
Moved page from [[en:Module:typing-aids/data/omr-Deva]]
193524
wikitext
text/x-wiki
{| class="wikitable"
! oldid || date/time || username || edit summary
|----
| 65208009 || 2022-01-05T22:20:22Z || Kutchkutch || <nowiki>Created page with "local data = {} local U = mw.ustring.char local anusvAra = U(0x902) local visarga = U(0x903) local virAma = U(0x94D) local avagraha = "ऽ" local consonants = "कखगघङचछजझञटठडढणतथदधनपफबभमयरलळवशषसह" local consonant = "[" .. consonants .. "]" local acute = U(0x301) -- combining acute data["omr-Deva"] = { -- Vowels and modifiers. Do the diphthongs and diaereses first. {"ai", "ऐ"}, {"au", "औ"}, {"..."</nowiki>
|}
8tjs5zi5k0kqf1g4j0ww03ovsqj35sa
Module:typing-aids/data/omr-Deva
828
125565
193525
2024-11-21T10:48:37Z
Pinthura
2424
Moved page from [[en:Module:typing-aids/data/omr-Deva]] ([[Module talk:typing-aids/data/omr-Deva|history]])
193525
Scribunto
text/plain
local data = {}
local U = mw.ustring.char
local anusvAra = U(0x902)
local visarga = U(0x903)
local virAma = U(0x94D)
local avagraha = "ऽ"
local consonants = "कखगघङचछजझञटठडढणतथदधनपफबभमयरलळवशषसह"
local consonant = "[" .. consonants .. "]"
local acute = U(0x301) -- combining acute
data["omr-Deva"] = {
-- Vowels and modifiers. Do the diphthongs and diaereses first.
{"ai", "ऐ"},
{"au", "औ"},
{"ä", "अ"},
{"ö", "ओ"},
{"ï", "इ"},
{"ü", "उ"},
{"a", "अ"},
{"ā", "आ"},
{"i", "इ"},
{"ī", "ई"},
{"u", "उ"},
{"ū", "ऊ"},
{"e", "ए"},
{"o", "ओ"},
{"ṝ", "ॠ"},
{"ṛ", "ऋ"},
{"r̥", "ऋ"},
{"ḹ", "ॡ"},
{"l̥̄", "ॡ"},
{"ḷ", "ळ"},
{"(अ)[%-/]([इउ])", "%1%2"}, -- a-i, a-u for अइ, अउ; must follow rules for "ai", "au"
-- Two-letter consonants must go before h.
{"kh", "ख"},
{"gh", "घ"},
{"ch", "छ"},
{"jh", "झ"},
{"ṭh", "ठ"},
{"ḍh", "ढ"},
{"th", "थ"},
{"dh", "ध"},
{"ph", "फ"},
{"bh", "भ"},
{"h", "ह"},
-- Other stops.
{"k", "क"},
{"g", "ग"},
{"c", "च"},
{"j", "ज"},
{"ṭ", "ट"},
{"ḍ", "ड"},
{"t", "त"},
{"d", "द"},
{"p", "प"},
{"b", "ब"},
-- Nasals.
{"ṅ", "ङ"},
{"ñ", "ञ"},
{"ṇ", "ण"},
{"n", "न"},
{"n", "न"},
{"m", "म"},
-- Remaining consonants.
{"y", "य"},
{"r", "र"},
{"l", "ल"},
{"v", "व"},
{"ś", "श"},
{"ṣ", "ष"},
{"s", "स"},
{"ṃ", anusvAra},
{"ḥ", visarga},
{"'", avagraha},
-- This rule must be applied twice because a consonant may only be in one capture per operation,
-- so "CCC" will only recognize the first two consonants. Must follow all consonant conversions.
{"(" .. consonant .. ")(" .. consonant .. ")", "%1" .. virAma .. "%2"},
{"(" .. consonant .. ")(" .. consonant .. ")", "%1" .. virAma .. "%2"},
{"(" .. consonant .. ")$", "%1" .. virAma},
{acute, ""},
}
local vowels = {
["इ"] = U(0x93F),
["उ"] = U(0x941),
["ऋ"] = U(0x943),
["ए"] = U(0x947),
["ओ"] = U(0x94B),
["आ"] = U(0x93E),
["ई"] = U(0x940),
["ऊ"] = U(0x942),
["ऐ"] = U(0x948),
["औ"] = U(0x94C),
}
-- Convert independent vowels to diacritics after consonants. Must go after all consonant conversions.
for independentForm, diacriticalForm in pairs(vowels) do
table.insert(data["omr-Deva"], {"(" .. consonant .. ")" .. independentForm, "%1" .. diacriticalForm})
end
-- This must go last, after independent vowels are converted to diacritics, or "aï", "aü" won't work.
table.insert(data["omr-Deva"], {"(" .. consonant .. ")अ", "%1"})
-- [[w:Harvard-Kyoto]] to [[w:International Alphabet of Sanskrit Transliteration]]
data["omr-Deva-tr"] = {
[1] = {
["A"] = "ā",
["I"] = "ī",
["U"] = "ū",
["J"] = "ñ",
["T"] = "ṭ",
["D"] = "ḍ",
["N"] = "ṇ",
["G"] = "ṅ",
["z"] = "ś",
["S"] = "ṣ",
["M"] = "ṃ",
["L"] = "ḷ",
["H"] = "ḥ",
["LRR"] = "ḹ",
["/"] = acute,
},
[2] = {
["RR"] = "ṝ",
},
[3] = {
["R"] = "ṛ",
},
}
return data
gk41oz0qedyajcd1o8amwu4fpciqlay
Module talk:typing-aids/data/os
829
125566
193526
2024-11-21T10:48:47Z
Pinthura
2424
Moved page from [[en:Module:typing-aids/data/os]]
193526
wikitext
text/x-wiki
{| class="wikitable"
! oldid || date/time || username || edit summary
|----
| 58003252 || 2019-11-14T22:56:27Z || Victar || <nowiki></nowiki>
|----
| 58003218 || 2019-11-14T22:37:06Z || Victar || <nowiki></nowiki>
|----
| 58003215 || 2019-11-14T22:36:13Z || Victar || <nowiki></nowiki>
|----
| 58003196 || 2019-11-14T22:32:01Z || Victar || <nowiki></nowiki>
|----
| 58003170 || 2019-11-14T22:19:11Z || Victar || <nowiki></nowiki>
|----
| 58003167 || 2019-11-14T22:16:53Z || Victar || <nowiki></nowiki>
|----
| 58003164 || 2019-11-14T22:13:23Z || Victar || <nowiki>for the lazy</nowiki>
|----
| 58003161 || 2019-11-14T22:12:13Z || Victar || <nowiki>i think that's all i really need</nowiki>
|----
| 58003100 || 2019-11-14T21:44:47Z || Victar || <nowiki>Created page with "local U = mw.ustring.char local acute = U(0x301) local caron = U(0x30C) local diaeresis = U(0x308) local grave = U(0x300) local data = { { ["ʷ"] = "°", ["g°y"] = "г..."</nowiki>
|}
r8eo5gf6xz5vd8le989xw5yugvfolwp
Module:typing-aids/data/os
828
125567
193527
2024-11-21T10:48:57Z
Pinthura
2424
Moved page from [[en:Module:typing-aids/data/os]] ([[Module talk:typing-aids/data/os|history]])
193527
Scribunto
text/plain
local U = mw.ustring.char
local acute = U(0x301)
local caron = U(0x30C)
local diaeresis = U(0x308)
local grave = U(0x300)
local data = {
{
["ë"] = "ё", ["e" .. diaeresis] = "ё", ["Ë"] = "Ё", ["E" .. diaeresis] = "Ё",
["ž"] = "ж", ["z" .. caron] = "ж", ["Z" .. caron] = "Ж", ["Ž"] = "Ж",
["šč"] = "щ", ["s" .. caron .. "c" .. caron] = "щ", ["ŠČ"] = "Щ", ["S" .. caron .. "C" .. caron] = "Щ",
["š"] = "ш", ["s" .. caron] = "ш", ["Š"] = "Ш", ["S" .. caron] = "ш",
["ʺ"] = "ъ",
["ʹ"] = "ь",
["è"] = "э", ["e" .. grave] = "э", ["È"] = "Э", ["E" .. grave] = "Э",
["ju"] = "ю", ["Ju"] = "Ю",
["ja"] = "я", ["Ja"] = "Я"
},
{
["æ"] = "ӕ", ["ä"] = "ӕ", ["a" .. diaeresis] = "ӕ", ["Æ"] = "Ӕ", ["Ä"] = "Ӕ", ["A" .. diaeresis] = "Ӕ",
["ǧ"] = "гъ", ["g" .. caron] = "гъ", ["Ǧ"] = "Гъ", ["G" .. caron] = "Гъ",
["ǵ"] = "дж", ["g" .. acute] = "дж", ["Ǵ"] = "Дж", ["G" .. acute] = "Дж",
["ḱ"] = "ч", ["k" .. acute] = "ч", ["Ḱ"] = "Ч", ["K" .. acute] = "Ч",
},
{
["°"] = "у", ["o^"] = "у",
["ʷ"] = "у", ["w^"] = "У"
},
{
["a"] = "а", ["A"] = "А",
["b"] = "б", ["B"] = "Б",
["v"] = "в", ["V"] = "В",
["g"] = "г", ["G"] = "Г",
["ʒ"] = "дз", ["Ʒ"] = "Дз",
["d"] = "д", ["D"] = "Д",
["e"] = "е", ["E"] = "Е",
["z"] = "з", ["Z"] = "З",
["i"] = "и", ["I"] = "И",
["j"] = "й", ["J"] = "Й",
["k"] = "к", ["K"] = "К",
["l"] = "л", ["L"] = "Л",
["m"] = "м", ["M"] = "М",
["n"] = "н", ["N"] = "Н",
["o"] = "о", ["O"] = "О",
["p"] = "п", ["P"] = "П",
["r"] = "р", ["R"] = "Р",
["s"] = "с", ["S"] = "С",
["t"] = "т", ["T"] = "Т",
["u"] = "у", ["w"] = "у", ["U"] = "У", ["W"] = "У",
["f"] = "ф", ["F"] = "Ф",
["x"] = "х", ["X"] = "Х",
["q"] = "хъ", ["Q"] = "Хъ",
["c"] = "ц", ["C"] = "Ц",
["y"] = "ы", ["Y"] = "Ы",
["’"] = "ъ", ["'"] = "ъ"
}
}
return data
aszqtszlll3iv4agwsc6hv65vq20ev3
Module talk:typing-aids/data/oty
829
125568
193528
2024-11-21T10:49:07Z
Pinthura
2424
Moved page from [[en:Module:typing-aids/data/oty]]
193528
wikitext
text/x-wiki
{| class="wikitable"
! oldid || date/time || username || edit summary
|----
| 53591526 || 2019-07-09T00:58:08Z || 108.31.52.77 || <nowiki></nowiki>
|----
| 53591521 || 2019-07-09T00:53:48Z || 108.31.52.77 || <nowiki>Created page with "local data = {} local U = mw.ustring.char local anusvAra = U(0x11001) local visarga = U(0x11002) local virAma = U(0x11046) local consonants = "𑀓𑀔𑀕𑀖𑀗𑀘𑀙..."</nowiki>
|}
b2anpaeed9c64x82abzq0byxp79cuac
Module:typing-aids/data/oty
828
125569
193529
2024-11-21T10:49:17Z
Pinthura
2424
Moved page from [[en:Module:typing-aids/data/oty]] ([[Module talk:typing-aids/data/oty|history]])
193529
Scribunto
text/plain
local data = {}
local U = mw.ustring.char
local anusvAra = U(0x11001)
local visarga = U(0x11002)
local virAma = U(0x11046)
local consonants = "𑀓𑀔𑀕𑀖𑀗𑀘𑀙𑀚𑀛𑀜𑀝𑀞𑀟𑀠𑀡𑀢𑀣𑀤𑀥𑀦𑀧𑀨𑀩𑀪𑀫𑀬𑀭𑀮𑀯𑀰𑀱𑀲𑀳𑀴𑀵𑀶𑀷"
local consonant = "[" .. consonants .. "]"
local acute = U(0x301) -- combining acute
data["oty"] = {
[1] = {
["ai"] = "𑀐",
["au"] = "𑀒",
},
[2] = {
["ṃ"] = anusvAra,
["ḥ"] = visarga,
["kh"] = "𑀔",
["gh"] = "𑀖",
["ṅ"] = "𑀗",
["ch"] = "𑀙",
["jh"] = "𑀛",
["ñ"] = "𑀜",
["ṭh"] = "𑀞",
["ḍh"] = "𑀠",
["ṇ"] = "𑀡",
["th"] = "𑀣",
["dh"] = "𑀥",
["n"] = "𑀦",
["ph"] = "𑀨",
["bh"] = "𑀪",
["m"] = "𑀫",
["y"] = "𑀬",
["r"] = "𑀭",
["l"] = "𑀮",
["v"] = "𑀯",
["ś"] = "𑀰",
["ṣ"] = "𑀱",
["s"] = "𑀲",
["ḷ"] = "𑀴",
["ḻ"] = "𑀵",
["ṟ"] = "𑀶",
["ṉ"] = "𑀷",
},
[3] = {
["a"] = "𑀅",
["ā"] = "𑀆",
["i"] = "𑀇",
["ī"] = "𑀈",
["u"] = "𑀉",
["ū"] = "𑀊",
["e"] = "𑀏",
["o"] = "𑀑",
["k"] = "𑀓",
["g"] = "𑀕",
["c"] = "𑀘",
["j"] = "𑀚",
["ṭ"] = "𑀝",
["ḍ"] = "𑀟",
["t"] = "𑀢",
["d"] = "𑀤",
["n"] = "𑀦",
["p"] = "𑀧",
["b"] = "𑀩",
["h"] = "𑀳",
},
[4] = {
["ï"] = "i",
["ü"] = "u",
["[%-/]"] = "", -- a-i, a-u for अइ, अउ
["(" .. consonant .. ")" .. "(" .. consonant .. ")"] = "%1" .. virAma .. "%2",
["(" .. consonant .. ")$"] = "%1" .. virAma,
[acute] = "",
},
[5] = { -- this rule must be applied twice because a consonant may only be in one capture per operation, so "CCC" will only recognize the first two consonants
["(" .. consonant .. ")" .. "(" .. consonant .. ")"] = "%1" .. virAma .. "%2",
["i"] = "𑀇",
["u"] = "𑀉",
},
[6] = { -- This table is filled below
},
}
local vowels = {
["𑀅"] = "",
["𑀇"] = U(0x1103A),
["𑀉"] = U(0x1103C),
["𑀋"] = U(0x1103E),
["𑀍"] = U(0x11040),
["𑀏"] = U(0x11042),
["𑀑"] = U(0x11044),
["𑀆"] = U(0x11038),
["𑀈"] = U(0x1103B),
["𑀊"] = U(0x1103D),
["𑀌"] = U(0x1103F),
["𑀎"] = U(0x11041),
["𑀐"] = U(0x11043),
["𑀒"] = U(0x11045),
}
for independentForm, diacriticalForm in pairs(vowels) do
data["oty"][6]["(" .. consonant .. ")" .. independentForm] = "%1" .. diacriticalForm
end
-- [[w:Harvard-Kyoto]] to [[w:International Alphabet of Sanskrit Transliteration]]
data["oty-tr"] = {
[1] = {
["A"] = "ā",
["I"] = "ī",
["U"] = "ū",
["J"] = "ñ",
["T"] = "ṭ",
["D"] = "ḍ",
["N"] = "ṇ",
["G"] = "ṅ",
["z"] = "ś",
["S"] = "ṣ",
["M"] = "ṃ",
["H"] = "ḥ",
["/"] = acute,
},
[2] = {
["L"] = "ḷ",
["R"] = "ṟ",
["LL"] = "ḻ",
["NN"] = "ṉ",
},
}
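-- Editorial sketch, not part of the imported module: unlike the plain {pattern, replacement}
-- lists used for most scripts, data["oty"] is split into numbered tiers, and the driver is
-- assumed to walk them in order so that tier [1] digraphs ("ai", "au") and tier [2]
-- aspirates ("kh", "ṭh", ...) are consumed before the single letters of tier [3].
-- A minimal tier-walker under that assumption (the name is hypothetical):
local function applyTiers(text, tiers)
	for _, tier in ipairs(tiers) do
		for pattern, replacement in pairs(tier) do
			text = mw.ustring.gsub(text, pattern, replacement)
		end
	end
	return text
end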
return data
fmc3cykjkmmtcjv7bnwjtr1qwn7twxl
ප්රවර්ගය:Japanese kanji read as ゐ
14
125570
193531
2024-11-21T10:49:30Z
Pinthura
2424
Pinthura moved page [[ප්රවර්ගය:Japanese kanji read as ゐ]] to [[ප්රවර්ගය:ජපන් කන්ජි, ゐ ලෙස කියවන]]: Service: moving to the new category name.
193531
wikitext
text/x-wiki
#යළියොමුව [[:ප්රවර්ගය:ජපන් කන්ජි, ゐ ලෙස කියවන]]
ghlie7evd8etr1fw3hklg3m78icfg9k
193535
193531
2024-11-21T10:50:10Z
Pinthura
2424
Service: converting the redirect into a soft category redirect.
193535
wikitext
text/x-wiki
{{category redirect|ජපන් කන්ජි, ゐ ලෙස කියවන}}
809ey1uqiv94s1y792ixkuurtmtzt6u
Module talk:typing-aids/data/oui
829
125571
193532
2024-11-21T10:49:37Z
Pinthura
2424
Moved page from [[en:Module:typing-aids/data/oui]]
193532
wikitext
text/x-wiki
{| class="wikitable"
! oldid || date/time || username || edit summary
|----
| 75267576 || 2023-07-17T13:06:20Z || Yorınçga573 || <nowiki></nowiki>
|----
| 75267574 || 2023-07-17T13:06:08Z || Yorınçga573 || <nowiki></nowiki>
|----
| 75267541 || 2023-07-17T13:00:14Z || Yorınçga573 || <nowiki></nowiki>
|----
| 75267521 || 2023-07-17T12:56:09Z || Yorınçga573 || <nowiki></nowiki>
|----
| 75267500 || 2023-07-17T12:52:57Z || Yorınçga573 || <nowiki>Created page with "local data = {} local U = mw.ustring.char data = { { ["ʾ"] = "𐽰", -- aleph ["β"] = "𐽱", -- beth ["q"] = "𐽲", -- gimel-heth ["w"] = "𐽳", -- waw ["z"] = "𐽴", -- zayin ["x"] = "𐽵", -- final-heth ["y"] = "𐽶", -- yodh ["k"] = "𐽷", -- kaph ["d"] = "𐽸", -- lamedh ["m"] = "𐽹", -- mem ["n"] = "𐽺", -- nun ["s"] = "𐽻", -- samekh ["p"] = "𐽼", -- pe ["č"] = "𐽽", -- sadhe ["r"] = "𐽾", -- resh ["š"] = "..."</nowiki>
|}
7ya1poi1y9treuh8c38bu0s795rhou4
Module:typing-aids/data/oui
828
125572
193534
2024-11-21T10:49:57Z
Pinthura
2424
Moved page from [[en:Module:typing-aids/data/oui]] ([[Module talk:typing-aids/data/oui|history]])
193534
Scribunto
text/plain
local U = mw.ustring.char
local data = {
["ʾ"] = "𐽰", -- aleph
["β"] = "𐽱", -- beth
["q"] = "𐽲", -- gimel-heth
["w"] = "𐽳", -- waw
["z"] = "𐽴", -- zayin
["x"] = "𐽵", -- final-heth
["y"] = "𐽶", -- yodh
["k"] = "𐽷", -- kaph
["d"] = "𐽸", -- lamedh
["m"] = "𐽹", -- mem
["n"] = "𐽺", -- nun
["s"] = "𐽻", -- samekh
["p"] = "𐽼", -- pe
["č"] = "𐽽", -- sadhe
["r"] = "𐽾", -- resh
["š"] = "𐽿", -- shin
["t"] = "𐾀", -- taw
["l"] = "𐾁", -- lesh
}
return data
6xia79mhqm1upqqpsv61t7pxwifpbj0
Module talk:typing-aids/data/peo
829
125573
193536
2024-11-21T10:50:17Z
Pinthura
2424
Moved page from [[en:Module:typing-aids/data/peo]]
193536
wikitext
text/x-wiki
{| class="wikitable"
! oldid || date/time || username || edit summary
|----
| 78757559 || 2024-04-05T04:35:30Z || Theknightwho || <nowiki></nowiki>
|----
| 42808349 || 2017-05-07T00:19:28Z || Erutuon || <nowiki>vowelless consonants</nowiki>
|----
| 42808291 || 2017-05-07T00:12:13Z || Erutuon || <nowiki>basic syllabic replacements for Old Persian</nowiki>
|}
js3qjlbfsljfxwybbw8sl5o11xk8ccn
ප්රවර්ගය:Japanese කන්ජි, ゐ ලෙස කියවන
14
125574
193537
2024-11-21T10:50:30Z
Pinthura
2424
Service: creating a soft category redirect.
193537
wikitext
text/x-wiki
{{category redirect|ජපන් කන්ජි, ゐ ලෙස කියවන}}
809ey1uqiv94s1y792ixkuurtmtzt6u
Module:typing-aids/data/peo
828
125575
193538
2024-11-21T10:50:37Z
Pinthura
2424
Moved page from [[en:Module:typing-aids/data/peo]] ([[Module talk:typing-aids/data/peo|history]])
193538
Scribunto
text/plain
return {
[1] = {
["ka"] = "𐎣",
["ku"] = "𐎤",
["xa"] = "𐎧",
["xi"] = "𐎧",
["xu"] = "𐎧",
["ga"] = "𐎥",
["gu"] = "𐎦",
["ca"] = "𐎨",
["ci"] = "𐎨",
["cu"] = "𐎨",
["ça"] = "𐏂",
["çi"] = "𐏂",
["çu"] = "𐏂",
["ja"] = "𐎩",
["ji"] = "𐎪",
["ta"] = "𐎫",
["ti"] = "𐎫",
["tu"] = "𐎬",
["θa"] = "𐎰",
["θi"] = "𐎰",
["θu"] = "𐎰",
["da"] = "𐎭",
["di"] = "𐎮",
["du"] = "𐎯",
["pa"] = "𐎱",
["pi"] = "𐎱",
["pu"] = "𐎱",
["fa"] = "𐎳",
["fi"] = "𐎳",
["fu"] = "𐎳",
["ba"] = "𐎲",
["bi"] = "𐎲",
["bu"] = "𐎲",
["na"] = "𐎴",
["ni"] = "𐎴",
["nu"] = "𐎵",
["ma"] = "𐎶",
["mi"] = "𐎷",
["mu"] = "𐎸",
["ya"] = "𐎹",
["yi"] = "𐎹",
["yu"] = "𐎹",
["va"] = "𐎺",
["vi"] = "𐎻",
["ra"] = "𐎼",
["ri"] = "𐎼",
["ru"] = "𐎽",
["la"] = "𐎾",
["li"] = "𐎾",
["lu"] = "𐎾",
["sa"] = "𐎿",
["si"] = "𐎿",
["su"] = "𐎿",
["za"] = "𐏀",
["zi"] = "𐏀",
["zu"] = "𐏀",
["ša"] = "𐏁",
["ši"] = "𐏁",
["šu"] = "𐏁",
["ha"] = "𐏃",
["hi"] = "𐏃",
["hu"] = "𐏃",
},
[2] = {
["a"] = "𐎠",
["i"] = "𐎡",
["u"] = "𐎢",
},
[3] = {
["k"] = "𐎣",
["x"] = "𐎧",
["g"] = "𐎥",
["c"] = "𐎨",
["ç"] = "𐏂",
["j"] = "𐎩",
["t"] = "𐎫",
["θ"] = "𐎰",
["d"] = "𐎭",
["p"] = "𐎱",
["f"] = "𐎳",
["b"] = "𐎲",
["n"] = "𐎴",
["m"] = "𐎶",
["y"] = "𐎹",
["v"] = "𐎺",
["r"] = "𐎼",
["l"] = "𐎾",
["s"] = "𐎿",
["z"] = "𐏀",
["š"] = "𐏁",
["h"] = "𐏃",
},
--[[
[3] = {
[""] = "",
[""] = "",
},
]]
}
t6780txh5kvi2t4hk5pjpkg2s6g81e3
Module talk:typing-aids/data/pra
829
125576
193539
2024-11-21T10:50:57Z
Pinthura
2424
Moved page from [[en:Module:typing-aids/data/pra]]
193539
wikitext
text/x-wiki
{| class="wikitable"
! oldid || date/time || username || edit summary
|----
| 81446685 || 2024-08-31T14:34:44Z || Svartava || <nowiki>Svartava moved page [[Module:typing-aids/data/inc-pra]] to [[Module:typing-aids/data/pra]] without leaving a redirect</nowiki>
|----
| 81446681 || 2024-08-31T14:34:09Z || Svartava2 || <nowiki>/* top */ clean up, replaced: inc-pra → pra (4) using [[Project:AWB|AWB]]</nowiki>
|----
| 79050802 || 2024-04-27T09:40:38Z || SurjectionBot || <nowiki>Protected "[[Module:typing-aids/data/inc-pra]]": (bot) automatically protect highly visible templates/modules (reference score: 1993+ >= 1000) ([Edit=Allow only autoconfirmed users] (indefinite) [Move=Allow only autoconfirmed users] (indefinite))</nowiki>
|----
| 78757091 || 2024-04-05T03:33:26Z || Theknightwho || <nowiki>Use faster implementation of mw.ustring.char.</nowiki>
|----
| 76151176 || 2023-09-17T15:20:09Z || RichardW57 || <nowiki>Gathered replacements to use tables as replacement values.</nowiki>
|----
| 67304188 || 2022-06-06T19:54:46Z || RichardW57 || <nowiki>For Brahmi, <ḷ> is a letter rather than a vowel, especially for Prakrit.</nowiki>
|----
| 67298220 || 2022-06-06T12:16:09Z || RichardW57m || <nowiki>Clean out remnants of Tamil Brahmi short e and o.</nowiki>
|----
| 67295707 || 2022-06-06T11:34:23Z || RichardW57m || <nowiki>Removed length distinction on e and o - that is a feature of Tamil Brahmi.</nowiki>
|----
| 64956945 || 2021-12-14T08:33:30Z || Kutchkutch || <nowiki></nowiki>
|----
| 64955685 || 2021-12-14T03:47:15Z || Kutchkutch || <nowiki></nowiki>
|----
| 64955216 || 2021-12-14T02:32:03Z || Kutchkutch || <nowiki></nowiki>
|----
| 64955118 || 2021-12-14T02:20:50Z || Kutchkutch || <nowiki></nowiki>
|----
| 62945188 || 2021-06-24T07:58:27Z || SodhakSH || <nowiki>Works now!</nowiki>
|----
| 62945129 || 2021-06-24T07:38:59Z || SodhakSH || <nowiki>Undo revision 62945119 by [[Special:Contributions/SodhakSH|SodhakSH]] ([[User talk:SodhakSH|talk]]) was just a test</nowiki>
|----
| 62945119 || 2021-06-24T07:38:08Z || SodhakSH || <nowiki>Reverted edits by [[Special:Contributions/<bdi>Erutuon</bdi>|<bdi>Erutuon</bdi>]]; Restore to version 62453051 by [[Special:Contributions/<bdi>Erutuon</bdi>|<bdi>Erutuon</bdi>]]</nowiki>
|----
| 62690309 || 2021-06-07T08:49:46Z || Erutuon || <nowiki>convert to new format, like [[Module:typing-aids/data/sa]]</nowiki>
|----
| 62690275 || 2021-06-07T08:37:59Z || SodhakSH || <nowiki></nowiki>
|----
| 62690262 || 2021-06-07T08:33:58Z || SodhakSH || <nowiki></nowiki>
|----
| 62578277 || 2021-05-22T12:16:17Z || SodhakSH || <nowiki></nowiki>
|----
| 62547250 || 2021-05-18T14:36:50Z || SodhakSH || <nowiki>Undo revision 62547217 by [[Special:Contributions/SodhakSH|SodhakSH]] ([[User talk:SodhakSH|talk]])</nowiki>
|----
| 62547217 || 2021-05-18T14:26:35Z || SodhakSH || <nowiki>Updated @[[User:Kutchkutch]] hope there are no errors; though I'm no module-expert</nowiki>
|----
| 62453051 || 2021-04-30T07:44:14Z || Erutuon || <nowiki>raï → 𑀭𑀇 and aü → 𑀅𑀉 hopefully, as in [[Special:Diff/62452945|Module:typing-aids/data/sa]]</nowiki>
|----
| 62428048 || 2021-04-26T06:21:43Z || SodhakSH || <nowiki>Inc</nowiki>
|----
| 62428043 || 2021-04-26T06:17:13Z || SodhakSH || <nowiki>SodhakSH moved page [[Module:typing-aids/data/pra]] to [[Module:typing-aids/data/inc-pra]] without leaving a redirect</nowiki>
|----
| 62428023 || 2021-04-26T06:09:14Z || SodhakSH || <nowiki>SodhakSH moved page [[Module:typing-aids/data/inc-pra]] to [[Module:typing-aids/data/pra]] without leaving a redirect</nowiki>
|----
| 62428021 || 2021-04-26T06:08:51Z || SodhakSH || <nowiki></nowiki>
|----
| 62428011 || 2021-04-26T06:01:28Z || SodhakSH || <nowiki></nowiki>
|----
| 62427991 || 2021-04-26T05:51:25Z || SodhakSH || <nowiki>Created page with "local data = {} local U = mw.ustring.char local anusvAra = U(0x11001) local visarga = U(0x11002) local virAma = U(0x11046) local consonants = "𑀓𑀔𑀕𑀖𑀗𑀘𑀙..."</nowiki>
|}
smd0tcosela5ml57oz6ow8x3vdokq6b
Module:typing-aids/data/pra
828
125577
193540
2024-11-21T10:51:17Z
Pinthura
2424
Moved page from [[en:Module:typing-aids/data/pra]] ([[Module talk:typing-aids/data/pra|history]])
193540
Scribunto
text/plain
local data = {}
local U = require("Module:string/char")
local anusvAra = U(0x11001)
local visarga = U(0x11002)
local virAma = U(0x11046)
local consonants = "𑀓𑀔𑀕𑀖𑀗𑀘𑀙𑀚𑀛𑀜𑀝𑀞𑀟𑀠𑀡𑀢𑀣𑀤𑀥𑀦𑀧𑀨𑀩𑀪𑀫𑀬𑀭𑀮𑀯𑀰𑀱𑀲𑀳𑀴𑀵𑀶𑀷"
local consonant = "[" .. consonants .. "]"
local acute = U(0x301) -- combining acute
data["pra"] = {
-- Priority digraphs
{".[iuïüö]", {["ai"] = "𑀐", ["au"] = "𑀒", ["aï"] = "𑀅𑀇", ["aü"] = "𑀅𑀉",
["aö"] = "𑀅𑀑",}},
-- Digraphs with 'h'
{".h", {["kh"] = "𑀔", ["gh"] = "𑀖", ["ch"] = "𑀙", ["jh"] = "𑀛",
["ṭh"] = "𑀞", ["ḍh"] = "𑀠", ["th"] = "𑀣", ["dh"] = "𑀥",
["ph"] = "𑀨", ["bh"] = "𑀪", }},
{"ḹ", "𑀎"},
{"l̥̄", "𑀎"},
{"l̥", "𑀍"},
-- Single letters
{".", {["ṃ"] = anusvAra, ["ḥ"] = visarga,
["ṅ"] = "𑀗", ["ñ"] = "𑀜", ["ṇ"] = "𑀡", n = "𑀦",
m = "𑀫", y = "𑀬", r = "𑀭", l = "𑀮",
v = "𑀯", ["ś"] = "𑀰", ["ṣ"] = "𑀱", s = "𑀲",
a = "𑀅", ["ā"] = "𑀆", i = "𑀇", ["ī"] = "𑀈",
u = "𑀉", ["ū"] = "𑀊", e = "𑀏", o = "𑀑",
["ṝ"] = "𑀌", ["ḷ"] = "𑀴",
-- {"ḷ", "𑀍"}, -- Only Sanskrit uses this as a vowel.
k = "𑀓", g = "𑀕", c = "𑀘", j = "𑀚",
["ṭ"] = "𑀝", ["ḍ"] = "𑀟", t = "𑀢", d = "𑀤",
p = "𑀧", b = "𑀩", h = "𑀳", ['̈'] = "",
["ṛ"] = "𑀋",}},
{"(𑀅)[%-/]([𑀇𑀉])", "%1%2"}, -- a-i, a-u for 𑀅𑀇, 𑀅𑀉; must follow rules for "ai", "au"
{"(" .. consonant .. ")$", "%1" .. virAma},
{acute, ""},
-- this rule must be applied twice because a consonant may only be in one capture per operation, so "CCC" will only recognize the first two consonants
{"(" .. consonant .. ")" .. "(" .. consonant .. ")", "%1" .. virAma .. "%2"},
{"(" .. consonant .. ")" .. "(" .. consonant .. ")", "%1" .. virAma .. "%2"},
{"i", "𑀇"},
{"u", "𑀉"},
}
local vowels = {
["𑀇"] = U(0x1103A),
["𑀉"] = U(0x1103C),
["𑀋"] = U(0x1103E),
["𑀍"] = U(0x11040),
["𑀏"] = U(0x11042),
["𑀑"] = U(0x11044),
["𑀆"] = U(0x11038),
["𑀈"] = U(0x1103B),
["𑀊"] = U(0x1103D),
["𑀌"] = U(0x1103F),
["𑀎"] = U(0x11041),
["𑀐"] = U(0x11043),
["𑀒"] = U(0x11045),
}
for independentForm, diacriticalForm in pairs(vowels) do
table.insert(data["pra"], {"(" .. consonant .. ")" .. independentForm, "%1" .. diacriticalForm})
end
-- This must go last, after independent vowels are converted to diacritics, or "aï", "aü" won't work.
table.insert(data["pra"], {"(" .. consonant .. ")𑀅", "%1"})
-- [[w:Harvard-Kyoto]] to [[w:International Alphabet of Sanskrit Transliteration]]
data["pra-tr"] = {
[1] = {
["A"] = "ā",
["I"] = "ī",
["U"] = "ū",
["ĕ"] = "e", -- These two short vowels are transcriptional additions,
["ŏ"] = "o", -- used in Pischel's transcription of Prakrit.
["J"] = "ñ",
["T"] = "ṭ",
["D"] = "ḍ",
["N"] = "ṇ",
["G"] = "ṅ",
["z"] = "ś",
["S"] = "ṣ",
["M"] = "ṃ",
["H"] = "ḥ",
["ẏ"] = "y",
["lRR"] = "l̥̄",
["/"] = acute,
},
[2] = {
["lR"] = "l̥",
["RR"] = "ṝ",
},
[3] = {
["R"] = "ṛ",
},
}
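-- Editorial sketch, not part of the imported module: several entries above pair a Lua
-- pattern with a table, e.g. {".h", {["kh"] = "𑀔", ...}}. mw.ustring.gsub looks each match
-- up in that table and leaves the match unchanged when it is not a key, so one pass handles
-- every aspirate digraph while harmless sequences such as "ah" survive untouched.
-- A self-contained illustration with a reduced table:
local example = mw.ustring.gsub("khagha", ".h", {["kh"] = "𑀔", ["gh"] = "𑀖"})
-- example is now "𑀔a𑀖a".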
return data
m2gtmz8o859d2cynh1dueywso7yl42e
Module talk:typing-aids/data/pra-Deva
829
125578
193541
2024-11-21T10:51:37Z
Pinthura
2424
Moved page from [[en:Module:typing-aids/data/pra-Deva]]
193541
wikitext
text/x-wiki
{| class="wikitable"
! oldid || date/time || username || edit summary
|----
| 81446813 || 2024-08-31T14:55:23Z || Svartava2 || <nowiki>/* top */ replaced: inc-pra → pra (4) using [[Project:AWB|AWB]]</nowiki>
|----
| 81446749 || 2024-08-31T14:47:28Z || Svartava || <nowiki>Svartava moved page [[Module:typing-aids/data/inc-pra-Deva]] to [[Module:typing-aids/data/pra-Deva]] without leaving a redirect</nowiki>
|----
| 78757647 || 2024-04-05T04:58:17Z || Theknightwho || <nowiki>Use faster implementation of mw.ustring.char.</nowiki>
|----
| 65136480 || 2022-01-02T03:42:39Z || Kutchkutch || <nowiki>Created page with "local data = {} local U = mw.ustring.char local anusvAra = U(0x902) local visarga = U(0x903) local virAma = U(0x94D) local avagraha = "ऽ" local consonants = "कखगघङचछजझञटठडढणतथदधनपफबभमयरलवशषसह" local consonant = "[" .. consonants .. "]" local acute = U(0x301) -- combining acute data["inc-pra-Deva"] = { -- Vowels and modifiers. Do the diphthongs and diaereses first. {"ai", "ऐ"}, {"au", "औ"}, {..."</nowiki>
|}
4429ybxinivmmrqe2c5cxizzwn1uydd
Module:typing-aids/data/pra-Deva
828
125579
193542
2024-11-21T10:51:57Z
Pinthura
2424
Moved page from [[en:Module:typing-aids/data/pra-Deva]] ([[Module talk:typing-aids/data/pra-Deva|history]])
193542
Scribunto
text/plain
local data = {}
local U = require("Module:string/char")
local anusvAra = U(0x902)
local visarga = U(0x903)
local virAma = U(0x94D)
local avagraha = "ऽ"
local consonants = "कखगघङचछजझञटठडढणतथदधनपफबभमयरलवशषसह"
local consonant = "[" .. consonants .. "]"
local acute = U(0x301) -- combining acute
data["pra-Deva"] = {
-- Vowels and modifiers. Do the diphthongs and diaereses first.
{"ai", "ऐ"},
{"au", "औ"},
{"ä", "अ"},
{"ö", "ओ"},
{"ï", "इ"},
{"ü", "उ"},
{"a", "अ"},
{"ā", "आ"},
{"i", "इ"},
{"ī", "ई"},
{"u", "उ"},
{"ū", "ऊ"},
{"e", "ए"},
{"ĕ", "ऎ"},
{"o", "ओ"},
{"ŏ", "ऒ"},
{"ṝ", "ॠ"},
{"ṛ", "ऋ"},
{"r̥", "ऋ"},
{"ḹ", "ॡ"},
{"ḷ", "ऌ"},
{"(अ)[%-/]([इउ])", "%1%2"}, -- a-i, a-u for अइ, अउ; must follow rules for "ai", "au"
-- Two-letter consonants must go before h.
{"kh", "ख"},
{"gh", "घ"},
{"ch", "छ"},
{"jh", "झ"},
{"ṭh", "ठ"},
{"ḍh", "ढ"},
{"th", "थ"},
{"dh", "ध"},
{"ph", "फ"},
{"bh", "भ"},
{"h", "ह"},
-- Other stops.
{"k", "क"},
{"g", "ग"},
{"c", "च"},
{"j", "ज"},
{"ṭ", "ट"},
{"ḍ", "ड"},
{"t", "त"},
{"d", "द"},
{"p", "प"},
{"b", "ब"},
-- Nasals.
{"ṅ", "ङ"},
{"ñ", "ञ"},
{"ṇ", "ण"},
{"n", "न"},
{"n", "न"},
{"m", "म"},
-- Remaining consonants.
{"y", "य"},
{"r", "र"},
{"l", "ल"},
{"v", "व"},
{"ś", "श"},
{"ṣ", "ष"},
{"s", "स"},
{"ẏ", "य़"},
{"ṃ", anusvAra},
{"ḥ", visarga},
{"'", avagraha},
-- This rule must be applied twice because a consonant may only be in one capture per operation,
-- so "CCC" will only recognize the first two consonants. Must follow all consonant conversions.
{"(" .. consonant .. ")(" .. consonant .. ")", "%1" .. virAma .. "%2"},
{"(" .. consonant .. ")(" .. consonant .. ")", "%1" .. virAma .. "%2"},
{"(" .. consonant .. ")$", "%1" .. virAma},
{acute, ""},
}
local vowels = {
["इ"] = U(0x93F),
["उ"] = U(0x941),
["ऋ"] = U(0x943),
["ऌ"] = U(0x962),
["ए"] = U(0x947),
["ऎ"] = U(0x946),
["ओ"] = U(0x94B),
["ऒ"] = U(0x94A),
["आ"] = U(0x93E),
["ई"] = U(0x940),
["ऊ"] = U(0x942),
["ॠ"] = U(0x944),
["ॡ"] = U(0x963),
["ऐ"] = U(0x948),
["औ"] = U(0x94C),
}
-- Convert independent vowels to diacritics after consonants. Must go after all consonant conversions.
for independentForm, diacriticalForm in pairs(vowels) do
table.insert(data["pra-Deva"], {"(" .. consonant .. ")" .. independentForm, "%1" .. diacriticalForm})
end
-- This must go last, after independent vowels are converted to diacritics, or "aï", "aü" won't work.
table.insert(data["pra-Deva"], {"(" .. consonant .. ")अ", "%1"})
-- [[w:Harvard-Kyoto]] to [[w:International Alphabet of Sanskrit Transliteration]]
data["pra-Deva-tr"] = {
[1] = {
["A"] = "ā",
["I"] = "ī",
["U"] = "ū",
["J"] = "ñ",
["T"] = "ṭ",
["D"] = "ḍ",
["N"] = "ṇ",
["G"] = "ṅ",
["z"] = "ś",
["S"] = "ṣ",
["M"] = "ṃ",
["H"] = "ḥ",
["lRR"] = "ḹ",
["/"] = acute,
},
[2] = {
["lR"] = "ḷ",
["RR"] = "ṝ",
},
[3] = {
["R"] = "ṛ",
},
}
return data
5tgsojjtncd9qk3an90ob5rbdafkn6s
Module talk:typing-aids/data/pra-Knda
829
125580
193543
2024-11-21T10:52:17Z
Pinthura
2424
Moved page from [[en:Module:typing-aids/data/pra-Knda]]
193543
wikitext
text/x-wiki
{| class="wikitable"
! oldid || date/time || username || edit summary
|----
| 81446815 || 2024-08-31T14:55:25Z || Svartava2 || <nowiki>/* top */ replaced: inc-pra → pra (4) using [[Project:AWB|AWB]]</nowiki>
|----
| 81446751 || 2024-08-31T14:47:38Z || Svartava || <nowiki>Svartava moved page [[Module:typing-aids/data/inc-pra-Knda]] to [[Module:typing-aids/data/pra-Knda]] without leaving a redirect</nowiki>
|----
| 78757646 || 2024-04-05T04:58:04Z || Theknightwho || <nowiki>Use faster implementation of mw.ustring.char.</nowiki>
|----
| 76151757 || 2023-09-17T16:56:00Z || RichardW57 || <nowiki>Gather replacements together to use tables as replacement values. With similar changes to sa and inc-pra, pushed time for [[Module:pra-decl/noun/testcases/documentation]] down from over 10.00s to a median 7.65s.</nowiki>
|----
| 67304284 || 2022-06-06T20:09:35Z || RichardW57 || <nowiki><ḷ>is a consonant for transliteration for the Kannada script.</nowiki>
|----
| 65136376 || 2022-01-02T03:27:33Z || Kutchkutch || <nowiki></nowiki>
|----
| 64955550 || 2021-12-14T03:09:12Z || Kutchkutch || <nowiki></nowiki>
|----
| 64955543 || 2021-12-14T03:07:51Z || Kutchkutch || <nowiki></nowiki>
|----
| 64811786 || 2021-11-30T03:02:05Z || Kutchkutch || <nowiki></nowiki>
|----
| 64811717 || 2021-11-30T02:48:25Z || Kutchkutch || <nowiki>Created page with "local data = {} local U = mw.ustring.char local candrabindu = U(0xC81) local anusvAra = U(0xC82) local visarga = U(0xC83) local virAma = U(0xCCD) local avagraha = "ಽ" local consonants = "ಕಖಗಘಙಚಛಜಝಞಟಠಡಢಣತಥದಧನಪಫಬಭಮಯರಱಲವಶಷಸಹಳೞ" local consonant = "[" .. consonants .. "]" local acute = U(0x301) -- combining acute data["inc-pra-Knda"] = { {"ai", "ಐ"}, {"au", "ಔ"}, {"aï", "ಅಇ"}, {"aü",..."</nowiki>
|}
i5mhe3rqp0itsstqu7sob0nal0mqmns
Module:typing-aids/data/pra-Knda
828
125581
193544
2024-11-21T10:52:37Z
Pinthura
2424
Moved page from [[en:Module:typing-aids/data/pra-Knda]] ([[Module talk:typing-aids/data/pra-Knda|history]])
193544
Scribunto
text/plain
local data = {}
local U = require("Module:string/char")
local candrabindu = U(0xC81)
local anusvAra = U(0xC82)
local visarga = U(0xC83)
local virAma = U(0xCCD)
local nuktA = U(0xCBC)
local avagraha = "ಽ"
local consonants = "ಕಖಗಘಙಚಛಜಝಞಟಠಡಢಣತಥದಧನಪಫಬಭಮಯರಱಲವಶಷಸಹಳೞ"
local consonant = "[" .. consonants .. "]" .. nuktA .. "?"
local acute = U(0x301) -- combining acute
data["pra-Knda"] = {
-- Two-element vowels or vowel sequences.
{"a[iuïüö]", {["ai"] = "ಐ", ["au"] = "ಔ", ["aï"] = "ಅಇ", ["aü"] = "ಅಉ",
["aö"] = "ಅಓ"}},
{"l̥̄", "ೡ"},
{"l̥", "ಌ"},
-- Digraphs with 'h':
{".h", {["kh"] = "ಖ", ["gh"] = "ಘ", ["ch"] = "ಛ", ["jh"] = "ಝ",
["ṭh"] = "ಠ", ["ḍh"] = "ಢ", ["th"] = "ಥ", ["dh"] = "ಧ",
["ph"] = "ಫ", ["bh"] = "ಭ",}},
-- Non-ASCII single characters
{".", {["ḹ"] = "ೡ", ["ṃ"] = anusvAra, ["ḥ"] = visarga,
["ṅ"] = "ಙ", ["ñ"] = "ಞ", ["ṇ"] = "ಣ", ["ś"] = "ಶ",
["ṣ"] = "ಷ", ["ā"] = "ಆ", ["ī"] = "ಈ", ["ū"] = "ಊ",
["ĕ"] = "ಎ", ["ŏ"] = "ಒ", ["ṝ"] = "ೠ", ["ḷ"] = "ಳ",
["ṭ"] = "ಟ", ["ḍ"] = "ಡ", ['̈'] = "", ["ṛ"] = "ಋ",}},
-- ASCII letters
{"[a-z]", {n = "ನ", m = "ಮ", y = "ಯ", r = "ರ",
l = "ಲ", v = "ವ", s = "ಸ", a = "ಅ",
i = "ಇ", u = "ಉ", e = "ಏ", o = "ಓ",
k = "ಕ", g = "ಗ", c = "ಚ", j = "ಜ",
t = "ತ", d = "ದ", p = "ಪ", b = "ಬ",
h = "ಹ"}},
{"(ಅ)[%-/]([ಇಉ])", "%1%2"}, -- a-i, a-u for 𑀅𑀇, 𑀅𑀉; must follow rules for "ai", "au"
{"(" .. consonant .. ")$", "%1" .. virAma},
{acute, ""},
-- this rule must be applied twice because a consonant may only be in one capture per operation, so "CCC" will only recognize the first two consonants
{"(" .. consonant .. ")" .. "(" .. consonant .. ")", "%1" .. virAma .. "%2"},
{"(" .. consonant .. ")" .. "(" .. consonant .. ")", "%1" .. virAma .. "%2"},
{"i", "ಇ"},
{"u", "ಉ"},
}
local vowels = {
["ಇ"] = U(0xCBF),
["ಉ"] = U(0xCC1),
["ಋ"] = U(0xCC3),
["ಌ"] = U(0xCE2),
["ಎ"] = U(0xCC6),
["ಏ"] = U(0xCC7),
["ಒ"] = U(0xCCA),
["ಓ"] = U(0xCCB),
["ಆ"] = U(0xCBE),
["ಈ"] = U(0xCC0),
["ಊ"] = U(0xCC2),
["ೠ"] = U(0xCC4),
["ೡ"] = U(0xCE3),
["ಐ"] = U(0xCC8),
["ಔ"] = U(0xCCC),
}
for independentForm, diacriticalForm in pairs(vowels) do
table.insert(data["pra-Knda"], {"(" .. consonant .. ")" .. independentForm, "%1" .. diacriticalForm})
end
-- This must go last, after independent vowels are converted to diacritics, or "aï", "aü" won't work.
table.insert(data["pra-Knda"], {"(" .. consonant .. ")ಅ", "%1"})
-- [[w:Harvard-Kyoto]] to [[w:International Alphabet of Sanskrit Transliteration]]
data["pra-Knda-tr"] = {
[1] = {
["A"] = "ā",
["I"] = "ī",
["U"] = "ū",
["J"] = "ñ",
["T"] = "ṭ",
["D"] = "ḍ",
["N"] = "ṇ",
["G"] = "ṅ",
["z"] = "ś",
["S"] = "ṣ",
["M"] = "ṃ",
["H"] = "ḥ",
["lRR"] = "ḹ",
["/"] = acute,
},
[2] = {
["lR"] = "ḷ",
["RR"] = "ṝ",
},
[3] = {
["R"] = "ṛ",
},
}
return data
qjuhu36fy2qbk5ozg9onc6exk3sdmjo
Module talk:typing-aids/data/sa-Gujr
829
125582
193545
2024-11-21T10:52:57Z
Pinthura
2424
Moved page from [[en:Module:typing-aids/data/sa-Gujr]]
193545
wikitext
text/x-wiki
{| class="wikitable"
! oldid || date/time || username || edit summary
|----
| 81491926 || 2024-09-05T14:41:01Z || Svartava || <nowiki></nowiki>
|----
| 81491910 || 2024-09-05T14:37:42Z || Svartava || <nowiki></nowiki>
|----
| 81491907 || 2024-09-05T14:37:18Z || Svartava || <nowiki>Created page with "local data = {} local U = require("Module:string/char") local anusvAra = U(0x0A82) local visarga = U(0x0A83) local virAma = U(0x0ACD) local avagraha = "𑇁" local consonants = "કખગઘઙચછજઝઞટઠડઢણતથદધનપફબભમયરલવળશષસહ" local consonant = "[" .. consonants .. "]" local acute = U(0x301) -- combining acute data["sa-Gujr"] = { -- Vowels and modifiers. Do the diphthongs and diaereses first. {"ai", "ઐ"}, {..."</nowiki>
|}
0hesknvcc6f3wn986zw3c22bbgoav7i
Module:typing-aids/data/sa-Gujr
828
125583
193546
2024-11-21T10:53:17Z
Pinthura
2424
Moved page from [[en:Module:typing-aids/data/sa-Gujr]] ([[Module talk:typing-aids/data/sa-Gujr|history]])
193546
Scribunto
text/plain
local data = {}
local U = require("Module:string/char")
local anusvAra = U(0x0A82)
local visarga = U(0x0A83)
local virAma = U(0x0ACD)
local avagraha = "ઽ"
local consonants = "કખગઘઙચછજઝઞટઠડઢણતથદધનપફબભમયરલવળશષસહ"
local consonant = "[" .. consonants .. "]"
local acute = U(0x301) -- combining acute
data["sa-Gujr"] = {
-- Vowels and modifiers. Do the diphthongs and diaereses first.
{"ai", "ઐ"},
{"au", "ઔ"},
{"ä", "અ"},
{"ö", "ઓ"},
{"ï", "ઇ"},
{"ü", "ઉ"},
{"a", "અ"},
{"ā", "આ"},
{"i", "ઇ"},
{"ī", "ઈ"},
{"u", "ઉ"},
{"ū", "ઊ"},
{"e", "𑆍"},
{"o", "ઓ"},
{"ṝ", "ૠ"},
{"ṛ", "ઋ"},
{"r̥", "ઋ"},
{"ḹ", "ૡ"},
{"ḷ", "ઌ"},
{"(અ)[%-/]([ઇઉ])", "%1%2"}, -- a-i, a-u for अइ, अउ; must follow rules for "ai", "au"
-- Two-letter consonants must go before h.
{"kh", "ખ"},
{"gh", "ઘ"},
{"ch", "છ"},
{"jh", "ઝ"},
{"ṭh", "ઠ"},
{"ḍh", "ઢ"},
{"th", "થ"},
{"dh", "ધ"},
{"ph", "ફ"},
{"bh", "ભ"},
{"h", "હ"},
-- Other stops.
{"k", "ક"},
{"g", "ગ"},
{"c", "ચ"},
{"j", "જ"},
{"ṭ", "ટ"},
{"ḍ", "ડ"},
{"t", "ત"},
{"d", "દ"},
{"p", "પ"},
{"b", "બ"},
-- Nasals.
{"ṅ", "ઙ"},
{"ñ", "ઞ"},
{"ṇ", "ણ"},
{"n", "ન"},
{"m", "મ"},
-- Remaining consonants.
{"y", "ય"},
{"r", "ર"},
{"l", "લ"},
{"v", "વ"},
{"ś", "શ"},
{"ṣ", "ષ"},
{"s", "સ"},
{"ṃ", anusvAra},
{"ḥ", visarga},
{"'", avagraha},
-- This rule must be applied twice because a consonant may only be in one capture per operation,
-- so "CCC" will only recognize the first two consonants. Must follow all consonant conversions.
{"(" .. consonant .. ")(" .. consonant .. ")", "%1" .. virAma .. "%2"},
{"(" .. consonant .. ")(" .. consonant .. ")", "%1" .. virAma .. "%2"},
{"(" .. consonant .. ")$", "%1" .. virAma},
{acute, ""},
}
local vowels = {
["આ"] = U(0x0ABE),
["ઇ"] = U(0x0ABF),
["ઈ"] = U(0x0AC0),
["ઉ"] = U(0x0AC1),
["ઊ"] = U(0x0AC2),
["ઋ"] = U(0x0AC3),
["ૠ"] = U(0x0AC4),
["એ"] = U(0x0AC5),
["ઐ"] = U(0x0AC8),
["ઌ"] = U(0x0AE2),
["ૡ"] = U(0x0AE3),
["ઓ"] = U(0x0ACB),
["ઔ"] = U(0x0ACC),
}
-- Convert independent vowels to diacritics after consonants. Must go after all consonant conversions.
for independentForm, diacriticalForm in pairs(vowels) do
table.insert(data["sa-Gujr"], {"(" .. consonant .. ")" .. independentForm, "%1" .. diacriticalForm})
end
-- This must go last, after independent vowels are converted to diacritics, or "aï", "aü" won't work.
table.insert(data["sa-Gujr"], {"(" .. consonant .. ")અ", "%1"})
-- [[w:Harvard-Kyoto]] to [[w:International Alphabet of Sanskrit Transliteration]]
data["sa-Gujr-tr"] = {
[1] = {
["A"] = "ā",
["I"] = "ī",
["U"] = "ū",
["J"] = "ñ",
["T"] = "ṭ",
["D"] = "ḍ",
["N"] = "ṇ",
["G"] = "ṅ",
["z"] = "ś",
["S"] = "ṣ",
["M"] = "ṃ",
["H"] = "ḥ",
["lRR"] = "ḹ",
["/"] = acute,
},
[2] = {
["lR"] = "ḷ",
["RR"] = "ṝ",
},
[3] = {
["R"] = "ṛ",
},
}
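-- Editorial sketch, not part of the imported module: the numbered sub-tables of
-- data["sa-Gujr-tr"] are assumed to run in order, so the longer Harvard-Kyoto sequences
-- "lRR" (tier 1) and "RR" (tier 2) are consumed before the lone "R" of tier 3; otherwise
-- "RR" would come out as "ṛṛ" rather than "ṝ". A two-step check of that ordering:
local hk = "RR"
hk = mw.ustring.gsub(hk, "RR", "ṝ") -- tier 2 first: "ṝ"
hk = mw.ustring.gsub(hk, "R", "ṛ")  -- tier 3 afterwards finds no bare "R" left
-- hk is now "ṝ"; with the order reversed it would have been "ṛṛ".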
return data
g0v6ea94a17vejpv62ufnws1ywrpkht
Module talk:typing-aids/data/sa-Modi
829
125584
193547
2024-11-21T10:53:37Z
Pinthura
2424
Moved page from [[en:Module:typing-aids/data/sa-Modi]]
193547
wikitext
text/x-wiki
{| class="wikitable"
! oldid || date/time || username || edit summary
|----
| 65207654 || 2022-01-05T20:58:46Z || Kutchkutch || <nowiki>Created page with "local data = {} local U = mw.ustring.char local anusvAra = U(0x1163D) local visarga = U(0x1163E) local virAma = U(0x1163F) local zwj = U(0x200D) local avagraha = "ऽ" local consonants = "𑘎𑘏𑘐𑘑𑘒𑘓𑘔𑘕𑘖𑘗𑘘𑘙𑘚𑘛𑘜𑘝𑘞𑘟𑘠𑘡𑘢𑘣𑘤𑘥𑘦𑘧𑘨𑘩𑘪𑘫𑘬𑘭𑘮" local consonant = "[" .. consonants .. "]" local acute = U(0x301) -- combining acute data["sa-Modi"] = { -- Vowels and modifiers. Do the diphthong..."</nowiki>
|}
h2psox1vqyk0lzysinjq6zy3fajtrhg
Module:typing-aids/data/sa-Modi
828
125585
193548
2024-11-21T10:53:57Z
Pinthura
2424
Moved page from [[en:Module:typing-aids/data/sa-Modi]] ([[Module talk:typing-aids/data/sa-Modi|history]])
193548
Scribunto
text/plain
local data = {}
local U = mw.ustring.char
local anusvAra = U(0x1163D)
local visarga = U(0x1163E)
local virAma = U(0x1163F)
local zwj = U(0x200D)
local avagraha = "ऽ"
local consonants = "𑘎𑘏𑘐𑘑𑘒𑘓𑘔𑘕𑘖𑘗𑘘𑘙𑘚𑘛𑘜𑘝𑘞𑘟𑘠𑘡𑘢𑘣𑘤𑘥𑘦𑘧𑘨𑘩𑘪𑘫𑘬𑘭𑘮"
local consonant = "[" .. consonants .. "]"
local acute = U(0x301) -- combining acute
data["sa-Modi"] = {
-- Vowels and modifiers. Do the diphthongs and diaereses first.
{"ai", "𑘋"},
{"au", "𑘍"},
{"ï", "𑘂"},
{"i", "𑘂"},
{"ī", "𑘃"},
{"ü", "𑘄"},
{"u", "𑘄"},
{"ū", "𑘅"},
{"a", "𑘀"},
{"ā", "𑘁"},
{"e", "𑘊"},
{"o", "𑘌"},
{"ṝ", "𑘇"},
{"ṛ", "𑘆"},
{"r̥", "𑘆"},
{"ḹ", "𑘉"},
{"ḷ", "𑘈"},
{"(𑘀)[%-/]([𑘂𑘄])", "%1%2"}, -- a-i, a-u for 𑘀𑘂, 𑘀𑘄; must follow rules for "ai", "au"
-- Two-letter consonants must go before h.
{"kh", "𑘏"},
{"gh", "𑘑"},
{"ch", "𑘔"},
{"jh", "𑘖"},
{"ṭh", "𑘙"},
{"ḍh", "𑘛"},
{"th", "𑘞"},
{"dh", "𑘠"},
{"ph", "𑘣"},
{"bh", "𑘥"},
{"h", "𑘮"},
-- Other stops.
{"k", "𑘎"},
{"g", "𑘐"},
{"c", "𑘓"},
{"j", "𑘕"},
{"ṭ", "𑘘"},
{"ḍ", "𑘚"},
{"t", "𑘝"},
{"d", "𑘟"},
{"p", "𑘢"},
{"b", "𑘤"},
-- Nasals.
{"ṅ", "𑘒"},
{"ñ", "𑘗"},
{"ṇ", "𑘜"},
{"n", "𑘡"},
{"m", "𑘦"},
-- Remaining consonants.
{"y", "𑘧"},
{"r", "𑘨"},
{"l", "𑘩"},
{"v", "𑘪"},
{"ś", "𑘫"},
{"ṣ", "𑘬"},
{"s", "𑘭"},
{"ṃ", anusvAra},
{"ḥ", visarga},
{"'", avagraha},
-- This rule must be applied twice because a consonant may only be in one capture per operation,
-- so "CCC" will only recognize the first two consonants. Must follow all consonant conversions.
{"(" .. consonant .. ")(" .. consonant .. ")", "%1" .. virAma .. "%2"},
{"(" .. consonant .. ")(" .. consonant .. ")", "%1" .. virAma .. "%2"},
{"(" .. consonant .. ")$", "%1" .. virAma},
{acute, ""},
}
local vowels = {
["𑘁"] = U(0x11630),
["𑘂"] = U(0x11631),
["𑘃"] = U(0x11632),
["𑘄"] = U(0x11633),
["𑘅"] = U(0x11634),
["𑘆"] = U(0x11635),
["𑘇"] = U(0x11636),
["𑘈"] = U(0x11637),
["𑘉"] = U(0x11638),
["𑘊"] = U(0x11639),
["𑘋"] = U(0x1163A),
["𑘌"] = U(0x1163B),
["𑘍"] = U(0x1163C),
}
-- Convert independent vowels to diacritics after consonants. Must go after all consonant conversions.
for independentForm, diacriticalForm in pairs(vowels) do
table.insert(data["sa-Modi"], {"(" .. consonant .. ")" .. independentForm, "%1" .. diacriticalForm})
end
-- This must go last, after independent vowels are converted to diacritics, or "aï", "aü" won't work.
table.insert(data["sa-Modi"], {"(" .. consonant .. ")𑘀", "%1"})
data["sa-Modi-tr"] = {
[1] = {
["A"] = "ā",
["I"] = "ī",
["U"] = "u",
["J"] = "ñ",
["T"] = "ṭ",
["D"] = "ḍ",
["N"] = "ṇ",
["G"] = "ṅ",
["z"] = "ś",
["S"] = "ṣ",
["M"] = "ṃ",
["H"] = "ḥ",
["LRR"] = "ḹ",
["/"] = acute,
},
[2] = {
["LR"] = "ḷ",
["RR"] = "ṝ",
},
[3] = {
["R"] = "ṛ",
},
}
return data
hph7afdh9g3ieemcaht2vtyp8v9q3xw
Module talk:typing-aids/data/sa-Shrd
829
125586
193549
2024-11-21T10:54:17Z
Pinthura
2424
Moved page from [[en:Module:typing-aids/data/sa-Shrd]]
193549
wikitext
text/x-wiki
{| class="wikitable"
! oldid || date/time || username || edit summary
|----
| 81354085 || 2024-08-23T14:04:53Z || Svartava || <nowiki></nowiki>
|----
| 81354066 || 2024-08-23T14:01:39Z || Svartava || <nowiki></nowiki>
|----
| 81354060 || 2024-08-23T13:59:59Z || Svartava || <nowiki></nowiki>
|----
| 81354005 || 2024-08-23T13:53:29Z || Svartava || <nowiki>Created page with "local data = {} local U = require("Module:string/char") local anusvAra = U(0x11181) local visarga = U(0x11182) local virAma = U(0x111C0) local avagraha = "𑇁" local consonants = "𑆑𑆒𑆓𑆔𑆕𑆖𑆗𑆘𑆙𑆚𑆛𑆜𑆝𑆞𑆟𑆠𑆡𑆢𑆣𑆤𑆥𑆦𑆧𑆨𑆩𑆪𑆫𑆬𑆮𑆭𑆯𑆰𑆱𑆲" local consonant = "[" .. consonants .. "]" local acute = U(0x301) -- combining acute data["sa-Shrd"] = { -- Vowels and modifiers. Do the diphthongs an..."</nowiki>
|}
o2e8ykyhn8ova5pjdun4vtwfrc6tzh3
Module:typing-aids/data/sa-Shrd
828
125587
193550
2024-11-21T10:54:27Z
Pinthura
2424
Moved page from [[en:Module:typing-aids/data/sa-Shrd]] ([[Module talk:typing-aids/data/sa-Shrd|history]])
193550
Scribunto
text/plain
local data = {}
local U = require("Module:string/char")
local anusvAra = U(0x11181)
local visarga = U(0x11182)
local virAma = U(0x111C0)
local avagraha = "𑇁"
local consonants = "𑆑𑆒𑆓𑆔𑆕𑆖𑆗𑆘𑆙𑆚𑆛𑆜𑆝𑆞𑆟𑆠𑆡𑆢𑆣𑆤𑆥𑆦𑆧𑆨𑆩𑆪𑆫𑆬𑆮𑆭𑆯𑆰𑆱𑆲"
local consonant = "[" .. consonants .. "]"
local acute = U(0x301) -- combining acute
data["sa-Shrd"] = {
-- Vowels and modifiers. Do the diphthongs and diaereses first.
{"ai", "𑆎"},
{"au", "𑆐"},
{"ä", "𑆃"},
{"ö", "𑆏"},
{"ï", "𑆅"},
{"ü", "𑆇"},
{"a", "𑆃"},
{"ā", "𑆄"},
{"i", "𑆅"},
{"ī", "𑆆"},
{"u", "𑆇"},
{"ū", "𑆈"},
{"e", "𑆍"},
{"o", "𑆏"},
{"ṝ", "𑆊"},
{"ṛ", "𑆉"},
{"r̥", "𑆉"},
{"ḹ", "𑆌"},
{"ḷ", "𑆋"},
{"(𑆃)[%-/]([𑆅𑆇])", "%1%2"}, -- a-i, a-u for अइ, अउ; must follow rules for "ai", "au"
-- Two-letter consonants must go before h.
{"kh", "𑆒"},
{"gh", "𑆔"},
{"ch", "𑆗"},
{"jh", "𑆙"},
{"ṭh", "𑆜"},
{"ḍh", "𑆞"},
{"th", "𑆡"},
{"dh", "𑆣"},
{"ph", "𑆦"},
{"bh", "𑆨"},
{"h", "𑆲"},
-- Other stops.
{"k", "𑆑"},
{"g", "𑆓"},
{"c", "𑆖"},
{"j", "𑆘"},
{"ṭ", "𑆛"},
{"ḍ", "𑆝"},
{"t", "𑆠"},
{"d", "𑆢"},
{"p", "𑆥"},
{"b", "𑆧"},
-- Nasals.
{"ṅ", "𑆕"},
{"ñ", "𑆚"},
{"ṇ", "𑆟"},
{"n", "𑆤"},
{"m", "𑆩"},
-- Remaining consonants.
{"y", "𑆪"},
{"r", "𑆫"},
{"l", "𑆬"},
{"v", "𑆮"},
{"ś", "𑆯"},
{"ṣ", "𑆰"},
{"s", "𑆱"},
{"ṃ", anusvAra},
{"ḥ", visarga},
{"'", avagraha},
-- This rule must be applied twice because a consonant may only be in one capture per operation,
-- so "CCC" will only recognize the first two consonants. Must follow all consonant conversions.
{"(" .. consonant .. ")(" .. consonant .. ")", "%1" .. virAma .. "%2"},
{"(" .. consonant .. ")(" .. consonant .. ")", "%1" .. virAma .. "%2"},
{"(" .. consonant .. ")$", "%1" .. virAma},
{acute, ""},
}
local vowels = {
["𑆄"] = U(0x111B3),
["𑆅"] = U(0x111B4),
["𑆆"] = U(0x111B5),
["𑆇"] = U(0x111B6),
["𑆈"] = U(0x111B7),
["𑆉"] = U(0x111B8),
["𑆊"] = U(0x111B9),
["𑆋"] = U(0x111BA),
["𑆌"] = U(0x111BB),
["𑆍"] = U(0x111BC),
["𑆎"] = U(0x111BD),
["𑆏"] = U(0x111BE),
["𑆐"] = U(0x111BF),
}
-- Convert independent vowels to diacritics after consonants. Must go after all consonant conversions.
for independentForm, diacriticalForm in pairs(vowels) do
table.insert(data["sa-Shrd"], {"(" .. consonant .. ")" .. independentForm, "%1" .. diacriticalForm})
end
-- This must go last, after independent vowels are converted to diacritics, or "aï", "aü" won't work.
table.insert(data["sa-Shrd"], {"(" .. consonant .. ")𑆃", "%1"})
-- [[w:Harvard-Kyoto]] to [[w:International Alphabet of Sanskrit Transliteration]]
data["sa-Shrd-tr"] = {
[1] = {
["A"] = "ā",
["I"] = "ī",
["U"] = "ū",
["J"] = "ñ",
["T"] = "ṭ",
["D"] = "ḍ",
["N"] = "ṇ",
["G"] = "ṅ",
["z"] = "ś",
["S"] = "ṣ",
["M"] = "ṃ",
["H"] = "ḥ",
["lRR"] = "ḹ",
["/"] = acute,
},
[2] = {
["lR"] = "ḷ",
["RR"] = "ṝ",
},
[3] = {
["R"] = "ṛ",
},
}
return data
6mn7hzndjqe3v8qb7sau1d62ijj2byh
Module talk:typing-aids/data/sa-Sidd
829
125588
193551
2024-11-21T10:54:37Z
Pinthura
2424
Moved page from [[en:Module:typing-aids/data/sa-Sidd]]
193551
wikitext
text/x-wiki
{| class="wikitable"
! oldid || date/time || username || edit summary
|----
| 80983144 || 2024-08-08T16:24:40Z || Kutchkutch || <nowiki></nowiki>
|----
| 80982914 || 2024-08-08T15:11:13Z || Svartava || <nowiki>Created page with "local data = {} local U = require("Module:string/char") local anusvAra = U(0x115BD) local visarga = U(0x115BE) local virAma = U(0x115BF) local consonants = "𑖎𑖏𑖐𑖑𑖒𑖓𑖔𑖕𑖖𑖗𑖘𑖙𑖚𑖛𑖜𑖝𑖞𑖟𑖠𑖡𑖢𑖣𑖤𑖥𑖦𑖧𑖨𑖩𑖪𑖫𑖬𑖭𑖮" local consonant = "[" .. consonants .. "]" local acute = U(0x301) -- combining acute data["sa-Sidd"] = { -- Vowels and modifiers. Do the diphthongs and diaereses first. {"ai", "..."</nowiki>
|}
fa9up7k9hw2o8vlcwmuo3owvlcfut44
Module:typing-aids/data/sa-Sidd
828
125589
193552
2024-11-21T10:54:47Z
Pinthura
2424
Moved page from [[en:Module:typing-aids/data/sa-Sidd]] ([[Module talk:typing-aids/data/sa-Sidd|history]])
193552
Scribunto
text/plain
local data = {}
local U = require("Module:string/char")
local anusvAra = U(0x115BD)
local visarga = U(0x115BE)
local virAma = U(0x115BF)
local avagraha = "ऽ"
local consonants = "𑖎𑖏𑖐𑖑𑖒𑖓𑖔𑖕𑖖𑖗𑖘𑖙𑖚𑖛𑖜𑖝𑖞𑖟𑖠𑖡𑖢𑖣𑖤𑖥𑖦𑖧𑖨𑖩𑖪𑖫𑖬𑖭𑖮"
local consonant = "[" .. consonants .. "]"
local acute = U(0x301) -- combining acute
data["sa-Sidd"] = {
-- Vowels and modifiers. Do the diphthongs and diaereses first.
{"ai", "𑖋"},
{"au", "𑖍"},
{"ä", "𑖀"},
{"ö", "𑖌"},
{"ï", "𑖂"},
{"ü", "𑖄"},
{"a", "𑖀"},
{"ā", "𑖁"},
{"i", "𑖂"},
{"ī", "𑖃"},
{"u", "𑖄"},
{"ū", "𑖅"},
{"e", "𑖊"},
{"o", "𑖌"},
{"ṝ", "𑖇"},
{"ṛ", "𑖆"},
{"r̥", "𑖆"},
{"ḹ", "𑖉"},
{"ḷ", "𑖈"},
{"(𑖀)[%-/]([𑖂𑖄])", "%1%2"}, -- a-i, a-u for अइ, अउ; must follow rules for "ai", "au"
-- Two-letter consonants must go before h.
{"kh", "𑖏"},
{"gh", "𑖑"},
{"ch", "𑖔"},
{"jh", "𑖖"},
{"ṭh", "𑖙"},
{"ḍh", "𑖛"},
{"th", "𑖞"},
{"dh", "𑖠"},
{"ph", "𑖣"},
{"bh", "𑖥"},
{"h", "𑖮"},
-- Other stops.
{"k", "𑖎"},
{"g", "𑖐"},
{"c", "𑖓"},
{"j", "𑖕"},
{"ṭ", "𑖘"},
{"ḍ", "𑖚"},
{"t", "𑖝"},
{"d", "𑖟"},
{"p", "𑖢"},
{"b", "𑖤"},
-- Nasals.
{"ṅ", "𑖒"},
{"ñ", "𑖗"},
{"ṇ", "𑖜"},
{"n", "𑖡"},
{"n", "𑖡"},
{"m", "𑖦"},
-- Remaining consonants.
{"y", "𑖧"},
{"r", "𑖨"},
{"l", "𑖩"},
{"v", "𑖪"},
{"ś", "𑖫"},
{"ṣ", "𑖬"},
{"s", "𑖭"},
{"ṃ", anusvAra},
{"ḥ", visarga},
{"'", avagraha},
-- This rule must be applied twice because a consonant may only be in one capture per operation,
-- so "CCC" will only recognize the first two consonants. Must follow all consonant conversions.
{"(" .. consonant .. ")(" .. consonant .. ")", "%1" .. virAma .. "%2"},
{"(" .. consonant .. ")(" .. consonant .. ")", "%1" .. virAma .. "%2"},
{"(" .. consonant .. ")$", "%1" .. virAma},
{acute, ""},
}
local vowels = {
["𑖂"] = U(0x115B0),
["𑖃"] = U(0x115B1),
["𑖄"] = U(0x115B2),
["𑖅"] = U(0x115B3),
["𑖆"] = U(0x115B4),
["𑖇"] = U(0x115B5),
["𑖊"] = U(0x115B8),
["𑖋"] = U(0x115B9),
["𑖌"] = U(0x115BA),
["𑖍"] = U(0x115BB),
["𑖁"] = U(0x115AF),
}
-- Convert independent vowels to diacritics after consonants. Must go after all consonant conversions.
for independentForm, diacriticalForm in pairs(vowels) do
table.insert(data["sa-Sidd"], {"(" .. consonant .. ")" .. independentForm, "%1" .. diacriticalForm})
end
-- This must go last, after independent vowels are converted to diacritics, or "aï", "aü" won't work.
table.insert(data["sa-Sidd"], {"(" .. consonant .. ")𑖀", "%1"})
-- [[w:Harvard-Kyoto]] to [[w:International Alphabet of Sanskrit Transliteration]]
data["sa-Sidd-tr"] = {
[1] = {
["A"] = "ā",
["I"] = "ī",
["U"] = "ū",
["J"] = "ñ",
["T"] = "ṭ",
["D"] = "ḍ",
["N"] = "ṇ",
["G"] = "ṅ",
["z"] = "ś",
["S"] = "ṣ",
["M"] = "ṃ",
["H"] = "ḥ",
["lRR"] = "ḹ",
["/"] = acute,
},
[2] = {
["lR"] = "ḷ",
["RR"] = "ṝ",
},
[3] = {
["R"] = "ṛ",
},
}
return data
13zugraqwh3guz0libbt84ghfrsodd6
Module talk:typing-aids/data/saz
829
125590
193553
2024-11-21T10:54:57Z
Pinthura
2424
Moved page from [[en:Module:typing-aids/data/saz]]
193553
wikitext
text/x-wiki
{| class="wikitable"
! oldid || date/time || username || edit summary
|----
| 63853536 || 2021-09-05T05:24:06Z || Kutchkutch || <nowiki></nowiki>
|----
| 63833802 || 2021-09-03T17:17:34Z || Kutchkutch || <nowiki>Created page with "local data = {} local U = mw.ustring.char local anusvAra = U(0xA880) local visarga = U(0xA882) local virAma = U(0xA8C4) local avagraha = "ऽ" local consonants = "ꢒꢓꢔꢕꢖꢗꢘꢙꢚꢛꢜꢝꢞꢟꢠꢡꢢꢣꢤꢥꢦꢧꢨꢩꢪꢫꢬꢭꢮꢯꢰꢱꢲꢳ" local consonant = "[" .. consonants .. "]" local acute = U(0x301) -- combining acute data["saz"] = { -- Vowels and modifiers. Do the diphthongs and diaereses first. {"ai", "ꢎ"}, {"au", "ꢑ"}, {"ä..."</nowiki>
|}
btw4xv3j0xlpy66zj8d09fv1ynjmg7j
Module:typing-aids/data/saz
828
125591
193554
2024-11-21T10:55:07Z
Pinthura
2424
Moved page from [[en:Module:typing-aids/data/saz]] ([[Module talk:typing-aids/data/saz|history]])
193554
Scribunto
text/plain
local data = {}
local U = mw.ustring.char
local anusvAra = U(0xA880)
local visarga = U(0xA882)
local hAru = U(0xA8B4)
local virAma = U(0xA8C4)
local avagraha = "ऽ"
local consonants = "ꢒꢓꢔꢕꢖꢗꢘꢙꢚꢛꢜꢝꢞꢟꢠꢡꢢꢣꢤꢥꢦꢧꢨꢩꢪꢫꢬꢭꢮꢯꢰꢱꢲꢳ"
local consonant = "[" .. consonants .. "]" .. hAru .. "?"
local acute = U(0x301) -- combining acute
data["saz"] = {
-- Vowels and modifiers. Do the diphthongs and diaereses first.
{"ai", "ꢎ"},
{"au", "ꢑ"},
{"ä", "ꢂ"},
{"ö", "ꢏ"},
{"ï", "ꢄ"},
{"ü", "ꢆ"},
{"a", "ꢂ"},
{"ā", "ꢃ"},
{"i", "ꢄ"},
{"ī", "ꢅ"},
{"u", "ꢆ"},
{"ū", "ꢇ"},
{"e", "ꢌ"},
{"ē", "ꢍ"},
{"o", "ꢏ"},
{"ō", "ꢐ"},
{"ṝ", "ꢉ"},
{"ṛ", "ꢈ"},
{"r̥", "ꢈ"},
{"ḹ", "ꢋ"},
{"l̥", "ꢊ"},
{"(ꢂ)[%-/]([ꢄꢆ])", "%1%2"}, -- a-i, a-u for ꢂꢄ, ꢂꢆ; must follow rules for "ai", "au"
-- Two-letter consonants must go before h.
{"kh", "ꢓ"},
{"gh", "ꢕ"},
{"ch", "ꢘ"},
{"jh", "ꢚ"},
{"ṭh", "ꢝ"},
{"ḍh", "ꢟ"},
{"th", "ꢢ"},
{"dh", "ꢤ"},
{"ph", "ꢧ"},
{"bh", "ꢩ"},
{"h", "ꢲ"},
-- Other stops.
{"k", "ꢒ"},
{"g", "ꢔ"},
{"c", "ꢗ"},
{"j", "ꢙ"},
{"ṭ", "ꢜ"},
{"ḍ", "ꢞ"},
{"t", "ꢡ"},
{"d", "ꢣ"},
{"p", "ꢦ"},
{"b", "ꢨ"},
-- Hāru.
{"n̤", "ꢥ" .. hAru},
{"m̤", "ꢪ" .. hAru},
{"r̤", "ꢬ" .. hAru},
{"l̤", "ꢭ" .. hAru},
-- Nasals.
{"ṅ", "ꢖ"},
{"ñ", "ꢛ"},
{"ṇ", "ꢠ"},
{"n", "ꢥ"},
{"n", "ꢥ"},
{"m", "ꢪ"},
-- Remaining consonants.
{"y", "ꢫ"},
{"r", "ꢬ"},
{"l", "ꢭ"},
{"v", "ꢮ"},
{"ś", "ꢯ"},
{"ṣ", "ꢰ"},
{"s", "ꢱ"},
{"ḷ", "ꢳ"},
{"ṃ", anusvAra},
{"ḥ", visarga},
{"'", avagraha},
-- This rule must be applied twice because a consonant may only be in one capture per operation,
-- so "CCC" will only recognize the first two consonants. Must follow all consonant conversions.
{"(" .. consonant .. ")(" .. consonant .. ")", "%1" .. virAma .. "%2"},
{"(" .. consonant .. ")(" .. consonant .. ")", "%1" .. virAma .. "%2"},
{"(" .. consonant .. ")$", "%1" .. virAma},
{acute, ""},
}
local vowels = {
["ꢄ"] = U(0xA8B6),
["ꢆ"] = U(0xA8B8),
["ꢈ"] = U(0xA8BA),
["ꢊ"] = U(0xA8BC),
["ꢌ"] = U(0xA8BE),
["ꢍ"] = U(0xA8BF),
["ꢏ"] = U(0xA8C1),
["ꢐ"] = U(0xA8C2),
["ꢃ"] = U(0xA8B5),
["ꢅ"] = U(0xA8B7),
["ꢇ"] = U(0xA8B9),
["ꢉ"] = U(0xA8BB),
["ꢋ"] = U(0xA8BD),
["ꢎ"] = U(0xA8C0),
["ꢑ"] = U(0xA8C3),
}
-- Convert independent vowels to diacritics after consonants. Must go after all consonant conversions.
for independentForm, diacriticalForm in pairs(vowels) do
table.insert(data["saz"], {"(" .. consonant .. ")" .. independentForm, "%1" .. diacriticalForm})
end
-- This must go last, after independent vowels are converted to diacritics, or "aï", "aü" won't work.
table.insert(data["saz"], {"(" .. consonant .. ")ꢂ", "%1"})
-- [[w:Harvard-Kyoto]] to [[w:International Alphabet of Sanskrit Transliteration]]
data["saz-tr"] = {
[1] = {
["A"] = "ā",
["I"] = "ī",
["U"] = "ū",
["E"] = "ē",
["O"] = "ō",
["J"] = "ñ",
["T"] = "ṭ",
["D"] = "ḍ",
["N"] = "ṇ",
["G"] = "ṅ",
["z"] = "ś",
["S"] = "ṣ",
["M"] = "ṃ",
["H"] = "ḥ",
["lRR"] = "ḹ",
["/"] = acute,
},
[2] = {
["n:"] = "n̤",
["m:"] = "m̤",
["r:"] = "r̤",
["l:"] = "l̤",
["lR"] = "l̥",
["RR"] = "ṝ",
},
[3] = {
["R"] = "ṛ",
["L"] = "ḷ",
},
}
return data
7av71wskp34yezz4wuzngnwqns6a37v
Module talk:typing-aids/data/sd
829
125592
193555
2024-11-21T10:55:17Z
Pinthura
2424
Moved page from [[en:Module:typing-aids/data/sd]]
193555
wikitext
text/x-wiki
{| class="wikitable"
! oldid || date/time || username || edit summary
|----
| 65208660 || 2022-01-06T00:32:29Z || Kutchkutch || <nowiki></nowiki>
|----
| 65207872 || 2022-01-05T21:50:12Z || Kutchkutch || <nowiki></nowiki>
|----
| 65207777 || 2022-01-05T21:21:06Z || Kutchkutch || <nowiki></nowiki>
|----
| 65207730 || 2022-01-05T21:13:38Z || Kutchkutch || <nowiki></nowiki>
|----
| 63844223 || 2021-09-04T23:30:55Z || Kutchkutch || <nowiki></nowiki>
|----
| 63835304 || 2021-09-03T19:30:52Z || Kutchkutch || <nowiki></nowiki>
|----
| 63835027 || 2021-09-03T19:13:28Z || Kutchkutch || <nowiki></nowiki>
|----
| 63834743 || 2021-09-03T18:57:33Z || Kutchkutch || <nowiki>Created page with "local data = {} local U = mw.ustring.char local anusvAra = U(0x112DF) local virAma = U(0x112EA) local nuktA = U(0x112E9) local consonants = "𑊺𑊻𑊼𑊽𑊾𑊿𑋀𑋁𑋂𑋃𑋄𑋅𑋆𑋇𑋈𑋉𑋊𑋋𑋌𑋍𑋎𑋏𑋐𑋑𑋒𑋓𑋔𑋕𑋖𑋗𑋘𑋙𑋚𑋛𑋜𑋝𑋞" local consonant = "[" .. consonants .. "]" .. nuktA .. "?" local acute = U(0x301) -- combining acute data["sd"] = { -- Vowels and modifiers. Do the diphthongs and diaereses first..."</nowiki>
|}
3oad1ij6gkxbe49t91ydaur9os489cn
Module:typing-aids/data/sd
828
125593
193556
2024-11-21T10:55:27Z
Pinthura
2424
Moved page from [[en:Module:typing-aids/data/sd]] ([[Module talk:typing-aids/data/sd|history]])
193556
Scribunto
text/plain
local data = {}
local U = mw.ustring.char
local anusvAra = U(0x112DF)
local virAma = U(0x112EA)
local nuktA = U(0x112E9)
local consonants = "𑊺𑊻𑊼𑊽𑊾𑊿𑋀𑋁𑋂𑋃𑋄𑋅𑋆𑋇𑋈𑋉𑋊𑋋𑋌𑋍𑋎𑋏𑋐𑋑𑋒𑋓𑋔𑋕𑋖𑋗𑋘𑋙𑋚𑋛𑋜𑋝𑋞"
local consonant = "[" .. consonants .. "]" .. nuktA .. "?"
local acute = U(0x301) -- combining acute
data["sd"] = {
-- Vowels and modifiers. Do the diphthongs and diaereses first.
{"ai", "𑊷"},
{"au", "𑊹"},
{"ä", "𑊰"},
{"ö", "𑊸"},
{"ï", "𑊲"},
{"ü", "𑊴"},
{"a", "𑊰"},
{"ā", "𑊱"},
{"i", "𑊲"},
{"ī", "𑊳"},
{"u", "𑊴"},
{"ū", "𑊵"},
{"e", "𑊶"},
{"o", "𑊸"},
{"(𑊰)[%-/]([𑊲𑊴])", "%1%2"}, -- a-i, a-u for 𑊰𑊲, 𑊰𑊴; must follow rules for "ai", "au"
-- Two-letter consonants must go before h.
{"kh", "𑊻"},
{"gh", "𑊾"},
{"ch", "𑋁"},
{"jh", "𑋄"},
{"ṭh", "𑋇"},
{"ḍh", "𑋋"},
{"th", "𑋎"},
{"dh", "𑋐"},
{"ph", "𑋓"},
{"bh", "𑋖"},
{"h", "𑋞"},
-- Implosives.
{"g̈", "𑊽"},
{"j̈", "𑋃"},
{"d̤", "𑋉"},
{"b̤", "𑋕"},
-- Consonants with nukta.
{"q", "𑊺𑋩"},
{"x", "𑊻𑋩"},
{"ġ", "𑊼𑋩"},
{"z", "𑋂𑋩"},
{"f", "𑋓𑋩"},
-- Other stops.
{"k", "𑊺"},
{"g", "𑊼"},
{"c", "𑋀"},
{"j", "𑋂"},
{"ṭ", "𑋆"},
{"ḍ", "𑋈"},
{"ṛ", "𑋊"},
{"t", "𑋍"},
{"d", "𑋏"},
{"p", "𑋒"},
{"b", "𑋔"},
-- Nasals.
{"ṅ", "𑊿"},
{"ñ", "𑋅"},
{"ṇ", "𑋌"},
{"n", "𑋑"},
{"n", "𑋑"},
{"m", "𑋗"},
-- Remaining consonants.
{"y", "𑋘"},
{"r", "𑋙"},
{"l", "𑋚"},
{"v", "𑋛"},
{"ś", "𑋜"},
{"s", "𑋝"},
{"ṃ", anusvAra},
-- This rule must be applied twice because a consonant may only be in one capture per operation,
-- so "CCC" will only recognize the first two consonants. Must follow all consonant conversions.
{"(" .. consonant .. ")(" .. consonant .. ")", "%1" .. virAma .. "%2"},
{"(" .. consonant .. ")(" .. consonant .. ")", "%1" .. virAma .. "%2"},
{"(" .. consonant .. ")$", "%1" .. virAma},
{acute, ""},
}
local vowels = {
["𑊲"] = U(0x112E1),
["𑊴"] = U(0x112E3),
["𑊶"] = U(0x112E5),
["𑊸"] = U(0x112E7),
["𑊱"] = U(0x112E0),
["𑊳"] = U(0x112E2),
["𑊵"] = U(0x112E4),
["𑊷"] = U(0x112E6),
["𑊹"] = U(0x112E8),
}
-- Convert independent vowels to diacritics after consonants. Must go after all consonant conversions.
for independentForm, diacriticalForm in pairs(vowels) do
table.insert(data["sd"], {"(" .. consonant .. ")" .. independentForm, "%1" .. diacriticalForm})
end
-- This must go last, after independent vowels are converted to diacritics, or "aï", "aü" won't work.
table.insert(data["sd"], {"(" .. consonant .. ")𑊰", "%1"})
-- [[w:Harvard-Kyoto]] to [[w:International Alphabet of Sanskrit Transliteration]]
data["sd-tr"] = {
[1] = {
["A"] = "ā",
["I"] = "ī",
["U"] = "ū",
["J"] = "ñ",
["T"] = "ṭ",
["D"] = "ḍ",
["N"] = "ṇ",
["G"] = "ṅ",
["S"] = "ś",
["M"] = "ṃ",
["/"] = acute,
},
[2] = {
["_gh_"] = "ġ",
["_g_"] = "g̈",
["_j_"] = "j̈",
["_d_"] = "d̤",
["_b_"] = "b̤",
["R"] = "ṛ",
},
}
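-- Editorial sketch, not part of the imported module: Khudawadi writes q, x, ġ, z and f as a
-- base letter plus the nukta, which is why `consonant` above is the character class followed
-- by an optional nukta. The cluster rules therefore keep a nukta attached to its base letter
-- when a virāma is inserted between two consonants:
local example = mw.ustring.gsub("𑋂" .. nuktA .. "𑋏",
	"(" .. consonant .. ")(" .. consonant .. ")", "%1" .. virAma .. "%2")
-- example is now 𑋂 + nukta + virāma + 𑋏, i.e. the nukta stays inside the first capture.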
return data
rnslnig1p8vabzrek8o6i61bofzfeic
Module talk:typing-aids/data/sgh-Cyrl
829
125594
193557
2024-11-21T10:55:37Z
Pinthura
2424
Moved page from [[en:Module:typing-aids/data/sgh-Cyrl]]
193557
wikitext
text/x-wiki
{| class="wikitable"
! oldid || date/time || username || edit summary
|----
| 76773417 || 2023-11-25T02:50:04Z || Victar || <nowiki>Undo revision [[Special:Diff/76773410|76773410]] by [[Special:Contributions/Victar|Victar]] ([[User talk:Victar|talk]])</nowiki>
|----
| 76773410 || 2023-11-25T02:47:49Z || Victar || <nowiki></nowiki>
|----
| 76773397 || 2023-11-25T02:44:41Z || Victar || <nowiki></nowiki>
|----
| 76773384 || 2023-11-25T02:41:18Z || Victar || <nowiki></nowiki>
|----
| 76773104 || 2023-11-25T01:17:42Z || Victar || <nowiki></nowiki>
|----
| 76773095 || 2023-11-25T01:16:42Z || Victar || <nowiki></nowiki>
|----
| 76773067 || 2023-11-25T01:10:01Z || Victar || <nowiki></nowiki>
|----
| 76772949 || 2023-11-25T00:50:24Z || Victar || <nowiki></nowiki>
|----
| 76772865 || 2023-11-25T00:36:33Z || Victar || <nowiki>Created page with "local U = mw.ustring.char local caron = U(0x30C) local circumflex = U(0x302) local macron = U(0x0AF) local ring = U(0x30A) local data = { { ["ā" .. macron] = "а̄", -- ā ["ɣ" .. caron] = "г̌", -- ɣ̌ ["e" .. macron] = "е", -- ē ["e" .. circumflex] = "е̂", -- ê ["z" .. caron] = "з̌", -- ž ["i" .. macron] = "ӣ", -- ī ["o" .. macron] = "о", -- ō ["u" .. macron] = "ӯ", -- ū ["u" .. macron .. ring] = "у̊", -- ū̊ ["x" .. caron]..."</nowiki>
|}
hlcritywmoiy1bqlal0ubi9ten0hlx2
Module:typing-aids/data/sgh-Cyrl
828
125595
193558
2024-11-21T10:55:47Z
Pinthura
2424
Moved page from [[en:Module:typing-aids/data/sgh-Cyrl]] ([[Module talk:typing-aids/data/sgh-Cyrl|history]])
193558
Scribunto
text/plain
local U = mw.ustring.char
local caron = U(0x30C) -- caron
local circumflex = U(0x302) -- circumflex
local diaeresis = U(0x308) -- diaeresis
local macron = U(0x304) -- macron
local ring_above = U(0x30A) -- ring above
local data = {
{
["ā" .. macron] = "а̄", -- ā
["ɣ" .. caron] = "г̌", -- ɣ̌
["e" .. macron] = "е", -- ē
["e" .. circumflex] = "е̂", -- ê
["e" .. circumflex] = "е̂", -- ê
["a" .. diaeresis .. macron] = "е̂", -- ǟ
["z" .. caron] = "з̌", -- ž
["i" .. macron] = "ӣ", -- ī
["o" .. macron] = "о", -- ō
["u" .. macron] = "ӯ", -- ū
["u" .. macron .. ring_above] = "у̊", -- ū̊
["x" .. caron] = "х̌", -- x̌
["c" .. caron] = "ч", -- č
["j" .. caron] = "ҷ", -- ǰ
["s" .. caron] = "ш", -- š
},
{
["a"] = "а",
["ā"] = "а̄",
["b"] = "б",
["v"] = "в",
["w"] = "в̌",
["g"] = "г",
["ɣ"] = "ғ",
["ɣ̌"] = "г̌",
["d"] = "д",
["δ"] = "д̌",
["ē"] = "е",
["ê"] = "е̂", ["ǟ"] = "е̂",
["z"] = "з",
["ž"] = "з̌",
["i"] = "и",
["ī"] = "ӣ",
["y"] = "й",
["k"] = "к",
["q"] = "қ",
["l"] = "л",
["m"] = "м",
["n"] = "н",
["ō"] = "о",
["p"] = "п",
["r"] = "р",
["s"] = "с",
["t"] = "т",
["θ"] = "т̌",
["u"] = "у",
["ū"] = "ӯ",
["ū̊"] = "у̊",
["f"] = "ф",
["x"] = "х",
["h"] = "ҳ",
["x̌"] = "х̌",
["c"] = "ц",
["č"] = "ч",
["ǰ"] = "ҷ",
["š"] = "ш",
},
}
-- Add replacements for capitals: both an all-caps version ("JA")
-- and capitalized version ("Ja").
for _, replacements in ipairs(data) do
-- sortedPairs saves the list of table keys so that we can modify the table
-- while iterating over it.
for text, replacement in require "Module:table".sortedPairs(replacements) do
replacement = mw.ustring.upper(replacement)
replacements[mw.ustring.upper(text)] = replacement
replacements[mw.ustring.gsub(text, "^.", mw.ustring.upper)] = replacement
end
end
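-- Illustrative note (for clarity): for the entry ["ž"] = "з̌" the loop above
-- also registers ["Ž"] = "З̌"; for single-letter keys the all-caps and
-- initial-capital forms coincide, and decomposed keys such as "z" .. caron are
-- covered too, since "^." only uppercases the base letter.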
return data
p6oxg2d5b9g0u6ou26oazgbv8sgxbtv
Module talk:typing-aids/data/skr
829
125596
193559
2024-11-21T10:55:57Z
Pinthura
2424
Moved page from [[en:Module:typing-aids/data/skr]]
193559
wikitext
text/x-wiki
{| class="wikitable"
! oldid || date/time || username || edit summary
|----
| 82307961 || 2024-10-07T07:35:59Z || Svartava || <nowiki>Created page with "local data = {} local U = require("Module:string/char") local consonants = "𑊄𑊅𑊆𑊈𑊊𑊋𑊌𑊏𑊐𑊑𑊒𑊔𑊕𑊖𑊗𑊘𑊙𑊚𑊛𑊜𑊝𑊟𑊠𑊢𑊣𑊤𑅰𑊦𑊧" local consonant = "[" .. consonants .. "]" local acute = U(0x301) -- combining acute data["skr"] = { -- Vowels and modifiers. Do the diphthongs and diaereses first. {"ai", "𑊁"}, {"au", "𑊂"}, {"ä", "𑊀"}, -- {"ö", ""}, {"ï", "𑊁"}, {"ü", "𑊂"}, {"a", "..."</nowiki>
|}
971uvf9l5mb2ro8hkj0pc0ya636a9wx
Module:typing-aids/data/skr
828
125597
193560
2024-11-21T10:56:07Z
Pinthura
2424
Moved page from [[en:Module:typing-aids/data/skr]] ([[Module talk:typing-aids/data/skr|history]])
193560
Scribunto
text/plain
local data = {}
local U = require("Module:string/char")
local consonants = "𑊄𑊅𑊆𑊈𑊊𑊋𑊌𑊏𑊐𑊑𑊒𑊔𑊕𑊖𑊗𑊘𑊙𑊚𑊛𑊜𑊝𑊟𑊠𑊢𑊣𑊤𑅰𑊦𑊧"
local consonant = "[" .. consonants .. "]"
local acute = U(0x301) -- combining acute
data["skr"] = {
-- Vowels and modifiers. Do the diphthongs and diaereses first.
{"ai", "𑊁"},
{"au", "𑊂"},
{"ä", "𑊀"},
-- {"ö", ""},
{"ï", "𑊁"},
{"ü", "𑊂"},
{"a", "𑊀"},
{"ā", "𑊀"},
{"i", "𑊁"},
{"ī", "𑊁"},
{"u", "𑊂"},
{"ū", "𑊂"},
{"e", "𑊃"},
-- {"o", ""},
{"(𑊀)[%-/]([𑊁𑊂])", "%1%2"}, -- a-i, a-u for 𑊀𑊁, 𑊀𑊂; must follow rules for "ai", "au"
-- Two-letter consonants must go before h.
{"kh", "𑊅"},
{"gh", "𑊈"},
-- {"ch", "𑊋"},
{"jh", ""},
{"ṭh", "𑊑"},
{"ḍh", "𑊔"},
{"th", "𑊗"},
{"dh", "𑊙"},
{"ph", "𑊜"},
{"bh", "𑊟"},
{"h", "𑊦"},
-- Other stops.
{"k", "𑊄"},
{"g", "𑊆"},
{"c", "𑊊"},
{"j", "𑊌"},
{"ṭ", "𑊐"},
{"ḍ", "𑊒"},
{"ṛ", "𑊧"},
{"t", "𑊖"},
{"d", "𑊘"},
{"p", "𑊛"},
{"b", "𑊝"},
-- Nasals.
{"ñ", "𑊏"},
{"ṇ", "𑊕"},
{"n", "𑊚"},
{"m", "𑊠"},
-- Remaining consonants.
{"y", "𑊌"},
{"r", "𑊢"},
{"l", "𑊣"},
{"v", "𑊤"},
-- {"ś", ""},
{"s", "𑊥"},
-- {"ṣ", ""},
-- {"ṃ", ""},
}
-- This must go last, after independent vowels are converted to diacritics, or "aï", "aü" won't work.
table.insert(data["skr"], {"(" .. consonant .. ")𑊀", "%1"})
-- [[w:Harvard-Kyoto]] to [[w:International Alphabet of Sanskrit Transliteration]]
data["skr-tr"] = {
[1] = {
["A"] = "ā",
["I"] = "ī",
["U"] = "ū",
["J"] = "ñ",
["T"] = "ṭ",
["D"] = "ḍ",
["N"] = "ṇ",
["z"] = "ś",
["S"] = "ṣ",
["M"] = "ṃ",
["/"] = acute,
},
[2] = {
["R"] = "ṛ",
},
}
return data
68nqq1vwzohzr6vl20fnmxao2bz5oqd
Module talk:typing-aids/data/sux
829
125598
193561
2024-11-21T10:56:17Z
Pinthura
2424
Moved page from [[en:Module:typing-aids/data/sux]]
193561
wikitext
text/x-wiki
{| class="wikitable"
! oldid || date/time || username || edit summary
|----
| 53396749 || 2019-06-21T16:08:20Z || Erutuon || <nowiki>localize variable</nowiki>
|----
| 48425972 || 2018-01-16T20:33:52Z || Erutuon || <nowiki>a start</nowiki>
|}
m0435ic30ly76xswcd45lzjzjgp3hbu
Module:typing-aids/data/sux
828
125599
193562
2024-11-21T10:56:27Z
Pinthura
2424
Moved page from [[en:Module:typing-aids/data/sux]] ([[Module talk:typing-aids/data/sux|history]])
193562
Scribunto
text/plain
local replacements = {}
replacements["sux"] = {
-- This converts from regular to diacriticked characters, before the
-- shortcuts below are processed.
-- The apostrophe is used in place of an acute, and a backslash \ in place of
-- a grave. ^ is replaced with háček or breve if it follows certain
-- consonants.
["pre"] = {
["a'"] = "á", ["a\\"] = "à",
["e'"] = "é", ["e\\"] = "è",
["i'"] = "í", ["i\\"] = "ì",
["u'"] = "ú", ["u\\"] = "ù",
["g~"] = "g̃",
["s^"] = "š", ["h^"] = "ḫ", ["r^"] = "ř",
},
-- V
["a"] = "𒀀", ["á"] = "𒀉",
-- CV
-- VC
-- VCV
}
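-- Illustrative note (assumption about how the "pre" stage is applied): the
-- ASCII conventions are rewritten into diacriticked letters before the sign
-- lookup, so typed "a'" first becomes "á" and is then matched by the
-- ["á"] = "𒀉" entry above, rather than leaving 𒀀 plus a stray apostrophe.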
--[[
replacements["sux-tr"] = {
}
--]]
return replacements
4g0vz1rmnvvpsvj6vryxu0ewh1lp6l0
ප්රවර්ගය:භාෂාව අනුව Terms by prefix
14
125600
193564
2024-11-21T10:56:34Z
Lee
19
Lee විසින් [[ප්රවර්ගය:භාෂාව අනුව Terms by prefix]] සිට [[ප්රවර්ගය:භාෂාව අනුව යෙදුම්, උපසර්ග අනුව]] වෙත පිටුව ගෙන යන ලදී
193564
wikitext
text/x-wiki
#යළියොමුව [[:ප්රවර්ගය:භාෂාව අනුව යෙදුම්, උපසර්ග අනුව]]
8jdv6whev9267p5yrkmeulzlpg0m97s
Module talk:typing-aids/data/yah-Cyrl
829
125601
193565
2024-11-21T10:56:37Z
Pinthura
2424
Moved page from [[en:Module:typing-aids/data/yah-Cyrl]]
193565
wikitext
text/x-wiki
{| class="wikitable"
! oldid || date/time || username || edit summary
|----
| 76783797 || 2023-11-26T06:31:18Z || Victar || <nowiki></nowiki>
|----
| 76782000 || 2023-11-26T01:18:51Z || Victar || <nowiki>@[[User:Erutuon]] is ["^e"] = "э" how i set the char to э if e is word-initial?</nowiki>
|----
| 76781916 || 2023-11-26T00:48:42Z || Victar || <nowiki></nowiki>
|----
| 76781909 || 2023-11-26T00:47:05Z || Victar || <nowiki></nowiki>
|----
| 76781905 || 2023-11-26T00:45:45Z || Victar || <nowiki></nowiki>
|----
| 76781888 || 2023-11-26T00:38:19Z || Victar || <nowiki>Created page with "local U = mw.ustring.char local acute = U(0x301) -- acute local caron = U(0x30C) -- caron local macron = U(0x304) -- macron local ring_above = U(0x30A) -- ring above local data = { { ["ā" .. macron] = "а̄", -- ā ["g" .. ring_above] = "г̊", -- g̊ ["g" .. acute] = "ѓ", -- ǵ ["ɣ" .. ring_above] = "ғ̊", -- ɣ̊ ["ɣ" .. caron] = "г̌", -- ɣ̌ ["z" .. caron] = "ж", -- ž ["k" .. ring_above] = "к̊", -- к̊ ["k" .. acute] = "ќ", -- ḱ [..."</nowiki>
|}
0uamo49rdxt0xc7q9cqmiadj25mpebn
Module:typing-aids/data/yah-Cyrl
828
125602
193566
2024-11-21T10:56:47Z
Pinthura
2424
Moved page from [[en:Module:typing-aids/data/yah-Cyrl]] ([[Module talk:typing-aids/data/yah-Cyrl|history]])
193566
Scribunto
text/plain
local U = mw.ustring.char
local acute = U(0x301) -- acute
local caron = U(0x30C) -- caron
local macron = U(0x304) -- macron
local ring_above = U(0x30A) -- ring above
local data = {
{
["a" .. macron] = "а̄", -- ā
["g" .. ring_above] = "г̊", -- g̊
["g" .. acute] = "ѓ", -- ǵ
["ɣ" .. ring_above] = "ғ̊", -- ɣ̊
["ɣ" .. caron] = "г̌", -- ɣ̌
["z" .. caron] = "ж", -- ž
["k" .. ring_above] = "к̊", -- k̊
["k" .. acute] = "ќ", -- ḱ
["q" .. ring_above] = "қ̊", -- q̊
["x" .. ring_above] = "х̊", -- x̊
["x" .. caron] = "х̌", -- x̌
["x" .. caron .. ring_above] = "х̌̊", -- x̌̊
["c" .. caron] = "ч", -- č
["j" .. caron] = "ҷ", -- ǰ
["s" .. caron] = "ш", -- š
["^e"] = "э",
},
{
["a"] = "а",
["ā"] = "а̄",
["b"] = "б",
["v"] = "в",
["w"] = "в̌",
["g"] = "г",
["ǵ"] = "ѓ",
["ɣ"] = "ғ",
["d"] = "д",
["δ"] = "д̌",
["e"] = "е",
["ə"] = "ә",
["ž"] = "ж",
["z"] = "з",
["i"] = "и",
["y"] = "й",
["k"] = "к",
["ḱ"] = "ќ",
["q"] = "қ",
["l"] = "л",
["m"] = "м",
["n"] = "н",
["o"] = "о",
["p"] = "п",
["r"] = "р",
["s"] = "с",
["t"] = "т",
["θ"] = "т̌",
["u"] = "у",
["f"] = "ф",
["x"] = "х",
["h"] = "ҳ",
["c"] = "ц",
["č"] = "ч",
["ǰ"] = "ҷ",
["š"] = "ш",
},
}
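-- Illustrative note (assumption): the key "^e" reads as a Lua pattern, where
-- "^" anchors the match at the start of the string, so a word-initial "e"
-- would be rewritten as "э" while "e" elsewhere falls through to the plain
-- ["e"] = "е" entry in the second table.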
-- Add replacements for capitals: both an all-caps version ("JA")
-- and capitalized version ("Ja").
for _, replacements in ipairs(data) do
-- sortedPairs saves the list of table keys so that we can modify the table
-- while iterating over it.
for text, replacement in require "Module:table".sortedPairs(replacements) do
replacement = mw.ustring.upper(replacement)
replacements[mw.ustring.upper(text)] = replacement
replacements[mw.ustring.gsub(text, "^.", mw.ustring.upper)] = replacement
end
end
return data
3xxwqiw4g3cpeglxttixviofzt9eurz
ප්රවර්ගය:Japanese terms by prefix
14
125603
193568
2024-11-21T10:58:14Z
Lee
19
Lee විසින් [[ප්රවර්ගය:Japanese terms by prefix]] සිට [[ප්රවර්ගය:ජපන් යෙදුම්, උපසර්ග අනුව]] වෙත පිටුව ගෙන යන ලදී
193568
wikitext
text/x-wiki
#යළියොමුව [[:ප්රවර්ගය:ජපන් යෙදුම්, උපසර්ග අනුව]]
dwek35vv901wpz00gc5v3hfjvdwtrz6
ප්රවර්ගය:ජපන් යෙදුම්, නිරුක්තිය අනුව
14
125604
193569
2021-06-19T06:15:56Z
en>Dpleibovitz
0
{{auto cat}}
193569
wikitext
text/x-wiki
{{auto cat}}
eomzlm5v4j7ond1phrju7cnue91g5qx
193570
193569
2024-11-21T10:59:05Z
Lee
19
[[:en:Category:Japanese_terms_by_etymology]] වෙතින් එක් සංශෝධනයක්
193569
wikitext
text/x-wiki
{{auto cat}}
eomzlm5v4j7ond1phrju7cnue91g5qx
193571
193570
2024-11-21T10:59:58Z
Lee
19
Lee විසින් [[ප්රවර්ගය:Japanese terms by etymology]] සිට [[ප්රවර්ගය:ජපන් යෙදුම්, නිරුක්තිය අනුව]] වෙත පිටුව ගෙන යන ලදී
193569
wikitext
text/x-wiki
{{auto cat}}
eomzlm5v4j7ond1phrju7cnue91g5qx
ප්රවර්ගය:Japanese terms by etymology
14
125605
193572
2024-11-21T10:59:58Z
Lee
19
Lee විසින් [[ප්රවර්ගය:Japanese terms by etymology]] සිට [[ප්රවර්ගය:ජපන් යෙදුම්, නිරුක්තිය අනුව]] වෙත පිටුව ගෙන යන ලදී
193572
wikitext
text/x-wiki
#යළියොමුව [[:ප්රවර්ගය:ජපන් යෙදුම්, නිරුක්තිය අනුව]]
8tawz0a8io4r1sruwaa1c3ut2oajbru
ප්රවර්ගය:ඉංග්රීසි terms by prefix
14
125606
193574
2024-11-21T11:04:20Z
Lee
19
Lee විසින් [[ප්රවර්ගය:ඉංග්රීසි terms by prefix]] සිට [[ප්රවර්ගය:ඉංග්රීසි යෙදුම්, උපසර්ග අනුව]] වෙත පිටුව ගෙන යන ලදී
193574
wikitext
text/x-wiki
#යළියොමුව [[:ප්රවර්ගය:ඉංග්රීසි යෙදුම්, උපසර්ග අනුව]]
g4neuy9277fbs74bhyskghkl80wmf9d
ප්රවර්ගය:ලතින් යෙදුම්, උපසර්ග අනුව
14
125607
193575
2022-09-02T05:56:20Z
en>WingerBot
0
WingerBot moved page [[Category:Latin words by prefix]] to [[Category:Latin terms by prefix]] without leaving a redirect: rename 'words' -> 'terms' in affix and compound categories (see [[Wiktionary:Beer parlour/2022/August]])
193575
wikitext
text/x-wiki
{{auto cat}}
eomzlm5v4j7ond1phrju7cnue91g5qx
193576
193575
2024-11-21T11:05:00Z
Lee
19
[[:en:Category:Latin_terms_by_prefix]] වෙතින් එක් සංශෝධනයක්
193575
wikitext
text/x-wiki
{{auto cat}}
eomzlm5v4j7ond1phrju7cnue91g5qx
193577
193576
2024-11-21T11:06:02Z
Lee
19
Lee විසින් [[ප්රවර්ගය:Latin terms by prefix]] සිට [[ප්රවර්ගය:ලතින් terms by prefix]] වෙත පිටුව ගෙන යන ලදී
193575
wikitext
text/x-wiki
{{auto cat}}
eomzlm5v4j7ond1phrju7cnue91g5qx
193579
193577
2024-11-21T11:06:36Z
Lee
19
Lee විසින් [[ප්රවර්ගය:ලතින් terms by prefix]] සිට [[ප්රවර්ගය:ලතින් යෙදුම්, උපසර්ග අනුව]] වෙත පිටුව ගෙන යන ලදී
193575
wikitext
text/x-wiki
{{auto cat}}
eomzlm5v4j7ond1phrju7cnue91g5qx
ප්රවර්ගය:Latin terms by prefix
14
125608
193578
2024-11-21T11:06:03Z
Lee
19
Lee විසින් [[ප්රවර්ගය:Latin terms by prefix]] සිට [[ප්රවර්ගය:ලතින් terms by prefix]] වෙත පිටුව ගෙන යන ලදී
193578
wikitext
text/x-wiki
#යළියොමුව [[:ප්රවර්ගය:ලතින් terms by prefix]]
tkyi7u917265vbozcrmacje7r8r9uv3
ප්රවර්ගය:ලතින් terms by prefix
14
125609
193580
2024-11-21T11:06:36Z
Lee
19
Lee විසින් [[ප්රවර්ගය:ලතින් terms by prefix]] සිට [[ප්රවර්ගය:ලතින් යෙදුම්, උපසර්ග අනුව]] වෙත පිටුව ගෙන යන ලදී
193580
wikitext
text/x-wiki
#යළියොමුව [[:ප්රවර්ගය:ලතින් යෙදුම්, උපසර්ග අනුව]]
1qapeqj7q0viae2z9kqno0i1s3yt151
ප්රවර්ගය:ලතින් උපසර්ග
14
125610
193581
2024-11-21T11:07:51Z
Pinthura
2424
සේවා: [[:en:Category:Latin prefixes]] තුළ තිබූ පෙළ මෙහි ඇතුළු කිරීම.
193581
wikitext
text/x-wiki
{{auto cat}}
eomzlm5v4j7ond1phrju7cnue91g5qx
193582
193581
2024-11-21T11:08:01Z
Pinthura
2424
Pinthura විසින් [[ප්රවර්ගය:Latin prefixes]] සිට [[ප්රවර්ගය:ලතින් උපසර්ග]] වෙත පිටුව ගෙන යන ලදී: සේවා: නව ප්රවර්ග නාමය වෙත ගෙනයාම.
193581
wikitext
text/x-wiki
{{auto cat}}
eomzlm5v4j7ond1phrju7cnue91g5qx
193584
193582
2024-11-21T11:08:11Z
Pinthura
2424
සේවා: ඉංග්රීසි ව්යාපෘතිය වෙත සබැඳියක් එක් කිරීම.
193584
wikitext
text/x-wiki
{{auto cat}}
[[en:Category:Latin prefixes]]
9h8tc7013pqn4k4f4qvjzjkx0csu0q1
ප්රවර්ගය:Latin prefixes
14
125611
193583
2024-11-21T11:08:01Z
Pinthura
2424
Pinthura විසින් [[ප්රවර්ගය:Latin prefixes]] සිට [[ප්රවර්ගය:ලතින් උපසර්ග]] වෙත පිටුව ගෙන යන ලදී: සේවා: නව ප්රවර්ග නාමය වෙත ගෙනයාම.
193583
wikitext
text/x-wiki
#යළියොමුව [[:ප්රවර්ගය:ලතින් උපසර්ග]]
6s53f6ynrcp1czbcxepd4y68r8eska0
193585
193583
2024-11-21T11:08:21Z
Pinthura
2424
සේවා: යළියොමුව, මෘදු ප්රවර්ග යළියොමුවක් බවට හැරවීම.
193585
wikitext
text/x-wiki
{{category redirect|ලතින් උපසර්ග}}
ekclis8egq4opo0yefofukns8xkyihh
ප්රවර්ගය:Latin උපසර්ග
14
125612
193586
2024-11-21T11:08:31Z
Pinthura
2424
සේවා: මෘදු ප්රවර්ග යළියොමුවක් නිර්මාණය.
193586
wikitext
text/x-wiki
{{category redirect|ලතින් උපසර්ග}}
ekclis8egq4opo0yefofukns8xkyihh
geometries
0
125613
193587
2024-11-21T11:16:37Z
Lee
19
නිර්මාණය
193587
wikitext
text/x-wiki
{{also|géométries}}
==English==
===Noun===
{{head|en|noun form}}
# {{plural of|en|geometry}}
===Anagrams===
* {{anagrams|en|a=eeegimorst|geometrise}}
==Catalan==
===Noun===
{{head|ca|noun form}}
# {{plural of|ca|geometria}}
jb6sfvchltj38v4tzdd89l97t73rdgp
geometry
0
125614
193588
2024-11-21T11:19:23Z
Lee
19
නිර්මාණය
193588
wikitext
text/x-wiki
==English==
===Etymology===
From {{inh|en|enm|gemetry}}, {{m|enm|geometrie}}, from {{der|en|fro|geometrie}} (modern {{cog|fr|géométrie}}),<ref>{{R:MED|id=MED18350|entry=ǧēmetrī(e|pos=n}}</ref> from {{der|en|la|geōmetria}}, from {{der|en|grc|γεωμετρία||geometry, land-survey}}, from {{m|grc|γεωμέτρης||land measurer}}, from {{m|grc|γῆ||earth, land, country}} + {{m|grc|-μετρία||measurement}}, from {{m|grc|μέτρον||a measure}}. {{surf|en|geo-|-metry}}. {{doublet|en|gematria}}.
===Pronunciation===
* {{IPA|en|/d͡ʒiːˈɒm.ɪ.tɹi/|/ˈd͡ʒɒm.ɪ.tɹi/|a=RP}}
* {{IPA|en|/d͡ʒiˈɑ.mə.tɹi/|a=GA}}
** {{audio|en|En-us-geometry.ogg|a=US}}
* {{IPA|en|/d͡ʒiːˈɔm.ə.tɹi/|a=AU}}
===Noun===
{{en-noun|~}}
# {{lb|en|mathematics|uncountable}} [[ජ්යාමිතිය]]
====Holonyms====
* {{l|en|mathematics}}
====Derived terms====
{{col-auto|en|synthetic geometry|affine differential geometry|anabelian geometry|analytical geometry|arithmetic geometry|birational geometry|complex geometry|computational geometry|conformal geometry|contact geometry|diophantine geometry|geometry of fear|imaginary geometry|inversive geometry|Kerr geometry|Klein geometry|noncommutative geometry|noneuclidean geometry|variable geometry
|absolute geometry
|affine geometry
|algebraic geometry
|analytic geometry
|chronogeometry
|combinatorial geometry
|descriptive geometry
|differential geometry
|elementary geometry
|elliptic geometry
|Euclidean geometry
|finite geometry
|fractal geometry
|geometry of numbers
|geometry shader
|hyperbolic geometry
|hypergeometry
|non-Euclidean geometry
|plane geometry
|pregeometry
|projective geometry
|Riemannian geometry
|sacred geometry
|spherical geometry
|taxicab geometry
|tropical geometry
|geometrize
|geometrician
|geometrodynamics
|supergeometry
|stereogeometry
|geometrogenesis
|macrogeometry
|microgeometry
|geometrist
|geometrylike
|astrogeometry
|ophthalmogeometry
|pangeometry
|neurogeometry
|metageometry
|synthetic geometry
}}
====Related terms====
* {{l|en|gematria}}
* {{l|en|geometer}}
* {{l|en|geometric}}
* {{l|en|geometrical}}
===See also===
* {{l|en|topology}}
===References===
{{reflist}}
9l1igvhywwkhze8u09o1vvapiheh7x1
ජ්යාමිතිය
0
125615
193589
2024-11-21T11:20:01Z
Lee
19
'== සිංහල == === නිරුක්තිය === {{rfe|si}} === නාම පදය === {{si-noun}} # {{rfdef|si}} ==== පරිවර්තන ==== {{trans-top|පරිවර්තන}} * ඉංග්රීසි: {{t|en|geometry}} {{trans-bottom}} <!-- === අමතර අවධානයට === * {{l|si|<<ආශ්රිත පවතින වෙනත් වචන>>}} -->' යොදමින් නව පිටුවක් තනන ලදි
193589
wikitext
text/x-wiki
== සිංහල ==
=== නිරුක්තිය ===
{{rfe|si}}
=== නාම පදය ===
{{si-noun}}
# {{rfdef|si}}
==== පරිවර්තන ====
{{trans-top|පරිවර්තන}}
* ඉංග්රීසි: {{t|en|geometry}}
{{trans-bottom}}
<!--
=== අමතර අවධානයට ===
* {{l|si|<<ආශ්රිත පවතින වෙනත් වචන>>}}
-->
k6p6684acfec9rg2h198wj6vlmsjxsg
geometrie
0
125616
193590
2024-11-21T11:22:50Z
Lee
19
නිර්මාණය
193590
wikitext
text/x-wiki
{{also|Geometrie|géométrie}}
==Dutch==
{{wp|lang=nl}}
===Etymology===
From {{inh|nl|dum|geometrie}}, from {{der|nl|fro|geometrie}}, from {{der|nl|la|geōmetria}}, from {{der|nl|grc|γεωμετρία}}.
===Pronunciation===
* {{IPA|nl|/ˌɣeː.oː.meːˈtri/}}
* {{audio|nl|Nl-geometrie.ogg}}
* {{hyphenation|nl|geo|me|trie}}
* {{rhymes|nl|i}}
===Noun===
{{nl-noun|f|-|-}}
# [[geometry]] {{defdate|from 17th c.}}
====Synonyms====
* {{l|nl|meetkunde}}
* {{l|nl|meetkunst}}
====Derived terms====
* {{l|nl|geometrisch}}
==Italian==
===Noun===
{{head|it|noun form|g=f}}
# {{plural of|it|geometria}}
===Anagrams===
* {{anagrams|it|a=eeegimort|geotermie}}
==Middle French==
===Noun===
{{frm-noun|f|-}}
# [[geometry]]
==Old French==
===Noun===
{{fro-noun|f|-}}
# [[geometry]] {{gloss|branch of mathematics}}
==Romanian==
===Etymology===
{{bor+|ro|fr|géométrie}}, from {{der|ro|la|geometria}}. Equivalent to {{af|ro|geo-|-metrie}}.
===Noun===
{{ro-noun|f|-}}
# [[geometry]]
====Declension====
{{ro-noun-f-ie|geometri|n=sg}}
{{C|ro|Geometry}}
3qk3xkptsgoq5hbudhgat7m8ocir9vr
γεωμετρία
0
125617
193591
2024-11-21T11:23:36Z
Lee
19
නිර්මාණය
193591
wikitext
text/x-wiki
==Ancient Greek==
===Alternative forms===
* {{alter|grc|γαμετρία||dor}}
* {{alter|grc|γεωμετρίη||ion}}
===Etymology===
{{affix|grc|γεω-|t1=land|-μετρίᾱ|t2=measurement}}.
===Pronunciation===
{{grc-IPA|γεωμετρῐ́ᾱ}}
===Noun===
{{grc-noun|head=γεωμετρῐ́ᾱ|γεωμετρῐ́ᾱς|f|first}}
# {{lb|grc|math}} [[geometry]]
# [[land]] [[survey]]
# [[land tax]]
====Inflection====
{{grc-decl|γεωμετρῐ́ᾱ|γεωμετρῐ́ᾱς}}
====Related terms====
{{col-auto|grc
|γεωμετρέω
|γεωμέτρης
|γεωμέτρητος
|γεωμετρικός
}}
====Descendants====
* {{desc|el|γεωμετρία}}
* {{desc|la|geōmetria|bor=1}}
===References===
* {{R:LSJ}}
* {{R:Middle Liddell}}
* {{R:DGE}}
* {{R:Bailly|400}}
* {{R:grc:Brill|427}}
* {{R:Woodhouse}}
{{C|grc|Geometry|Surveying}}
==Greek==
===Noun===
{{el-noun|f|γεωμετρίες}}
# {{lb|el|mathematics}} [[geometry]]
====Declension====
{{el-nF-α-ες-2b|γεωμετρί|γεωμετρι}}
====Synonyms====
* {{l|el|γεωμ.}} {{qualifier|abbreviation}}
====See also====
{{see|el|μαθηματικά|g=n-p|gloss=mathematics}}
====Further reading====
* {{pedia|lang=el}}
l9rze5vzfzkitngkalj7wxwjywxf5z6
γεωμέτρης
0
125618
193592
2024-11-21T11:23:58Z
Lee
19
නිර්මාණය
193592
wikitext
text/x-wiki
==Ancient Greek==
===Etymology===
{{root|grc|ine-pro|*meh₁-}}
From {{af|grc|γεω-|t1=earth|μετρέω|t2=to measure|-ης}}.
===Pronunciation===
{{grc-IPA}}
===Noun===
{{grc-noun|γεωμέτρου|m|first}}
# [[surveyor]]
# [[geometer]]
====Declension====
{{grc-decl|γεωμέτρης|ου}}
====Related terms====
{{col-auto|grc
|γεωμετρέω
|γεωμέτρητος
|γεωμετρίᾱ
|γεωμετρικός
}}
====Descendants====
* {{desc|el|γεωμέτρης}}
* {{desc|la|geōmetrēs|bor=1}} {{see desc}}
* {{desc|ka|გეომეტრი|bor=1}}
===Further reading===
* {{R:Bailly}}
* {{R:DGE}}
* {{R:Logeion}}
* {{R:LSJ}}
* {{R:Woodhouse}}
{{C|grc|Geometry|Occupations|Surveying}}
r2yxg3oxyt7nylyhk52xbnujnxb3z6m
-μετρία
0
125619
193593
2024-11-21T11:24:16Z
Lee
19
නිර්මාණය
193593
wikitext
text/x-wiki
==Ancient Greek==
===Etymology===
From {{af|grc|μέτρον|t1=measurement|-ίᾱ}}.
===Suffix===
{{grc-noun|-μετρίᾱ|-μετρίᾱς|f|first}}
# {{n-g|forms nouns pertaining to measurement}}: [[-metry]]
====Derived terms====
{{suffixsee|grc}}
{{col-auto|grc
|ἀμετρίᾱ
|ἀσυμμετρίᾱ
|ἐμμετρίᾱ
|ἰσομετρίᾱ
|σιτομετρίᾱ
|στερεομετρίᾱ
|ὑπερμετρίᾱ
|ψῑλομετρίᾱ
}}
====Descendants====
* {{desctree|la|-metria}}
dibuubkfov6pnim68pbsbsz49j9nfo5
μέτρον
0
125620
193594
2024-11-21T11:24:35Z
Lee
19
නිර්මාණය
193594
wikitext
text/x-wiki
==Ancient Greek==
===Etymology===
{{root|grc|ine-pro|*meh₁-}}
From {{der|grc|ine-pro|*meh₁-|t=to measure}} + {{af|grc|-τρον}}.<ref>{{R:grc:Beekes|939-40|μέτρον}}</ref>
===Pronunciation===
{{grc-IPA}}
===Noun===
{{grc-noun|μέτρου|n|second}}
# something used to measure: [[measure]], [[rule]], [[weight]]
# [[length]], [[width]], [[breadth]]
# {{lb|grc|music|poetry}} [[metre#Etymology 2|metre]]
====Inflection====
{{grc-decl|μέτρον|ου}}
====Derived terms====
{{col-auto|grc
|μετρέω
|μετρικός
|μέτριος
|τετρᾰ́μετρος
|-μετρία
}}
====Descendants====
* {{desc|el|μέτρο}}
* {{desc|en|metron|bor=1}}
* {{desctree|la|metrum|bor=1}}
===References===
<references />
===Further reading===
* {{R:LSJ}}
* {{R:Middle Liddell}}
* {{R:Autenrieth}}
* {{R:Bailly}}
* {{R:BDAG}}
* {{R:Cunliffe}}
* {{R:Slater}}
* {{R:Strong's|G|3358}}
* {{R:Woodhouse}}
m3i1nbs9j58bwjmfdkla2ijtgljohg0
geo-
0
125621
193595
2024-11-21T11:26:39Z
Lee
19
'{{also|geo|Geo|GEO|geó|géo|Geo.|géo-}} ==English== ===Etymology=== From {{derived|en|grc|γεω-|}}, combining form of {{m|grc|γῆ||earth}}. ===Pronunciation=== * {{IPA|en|/ˈd͡ʒiː.əʊ/|a=UK}} ** {{audio|en|LL-Q1860 (eng)-Vealhurl-geo-.wav|a=Southern England}} * {{IPA|en|/ˈd͡ʒi.oʊ/|a=US,CA}} * {{IPA|en|/ˈd͡ʒiː.əʉ/|a=AU}} * {{rhymes|en|iəʊ|s=2}} ===Prefix=== {{en-prefix}} # මහපොළො...' යොදමින් නව පිටුවක් තනන ලදි
193595
wikitext
text/x-wiki
{{also|geo|Geo|GEO|geó|géo|Geo.|géo-}}
==English==
===Etymology===
From {{derived|en|grc|γεω-|}}, combining form of {{m|grc|γῆ||earth}}.
===Pronunciation===
* {{IPA|en|/ˈd͡ʒiː.əʊ/|a=UK}}
** {{audio|en|LL-Q1860 (eng)-Vealhurl-geo-.wav|a=Southern England}}
* {{IPA|en|/ˈd͡ʒi.oʊ/|a=US,CA}}
* {{IPA|en|/ˈd͡ʒiː.əʉ/|a=AU}}
* {{rhymes|en|iəʊ|s=2}}
===Prefix===
{{en-prefix}}
# [[මහපොළොව]]
#: {{hyper|en|planeto-}}
# [[geography]]
====Derived terms====
{{prefixsee|en}}
{{col4|en
|geocaching
|geocentric
|geode
|geodesic
|geodesy
|geofabric
|geographer
|geographic
|geographical
|geography
|geologer
|geological
|geologist
|geology
|geomancy
|geomembrane
|geometer
|geometric
|geometrical
|geometry
|geomorphology
|geophysics
|geopolitical
|geopolitics
|geopressure
|geostationary
|geotechnical
|geotextile
|geothermal
|geotropism
|geo-block
|geo-blocking
|geocontent
|geofence
|geofencing
|geo-imputation
|geomap
|geo-military
|geospatial
}}
lgvpj61qve5ro1905a7v9wlj59upf9h
193596
193595
2024-11-21T11:26:59Z
Lee
19
193596
wikitext
text/x-wiki
{{also|geo|Geo|GEO|geó|géo|Geo.|géo-}}
==English==
===Etymology===
From {{derived|en|grc|γεω-|}}, combining form of {{m|grc|γῆ||earth}}.
===Pronunciation===
* {{IPA|en|/ˈd͡ʒiː.əʊ/|a=UK}}
** {{audio|en|LL-Q1860 (eng)-Vealhurl-geo-.wav|a=Southern England}}
* {{IPA|en|/ˈd͡ʒi.oʊ/|a=US,CA}}
* {{IPA|en|/ˈd͡ʒiː.əʉ/|a=AU}}
* {{rhymes|en|iəʊ|s=2}}
===Prefix===
{{en-prefix}}
# [[මහ පොළොව]]
#: {{hyper|en|planeto-}}
# [[geography]]
====Derived terms====
{{prefixsee|en}}
{{col4|en
|geocaching
|geocentric
|geode
|geodesic
|geodesy
|geofabric
|geographer
|geographic
|geographical
|geography
|geologer
|geological
|geologist
|geology
|geomancy
|geomembrane
|geometer
|geometric
|geometrical
|geometry
|geomorphology
|geophysics
|geopolitical
|geopolitics
|geopressure
|geostationary
|geotechnical
|geotextile
|geothermal
|geotropism
|geo-block
|geo-blocking
|geocontent
|geofence
|geofencing
|geo-imputation
|geomap
|geo-military
|geospatial
}}
jhkcwj9ttay1xg100fep5oabm4bxbaz
-metry
0
125622
193597
2024-11-21T11:28:27Z
Lee
19
'==English== ===Etymology=== From {{der|en|fro|-métrie}}, from {{der|en|la|-metria}}, from {{der|en|grc|-μετρία}}, from {{m|grc|μέτρον||[[measurement]]}} + {{m|grc|-ίᾱ||[[-y]]: ''[[form]]ing [[abstract noun]]s''}}. Equivalent to {{suffix|en|-meter|y}}. ===Suffix=== {{en-suffix}} # මිනුම් සහ මැනීම සම්බන්ධ නාම පද නිර්මාණය කරයි. [...' යොදමින් නව පිටුවක් තනන ලදි
193597
wikitext
text/x-wiki
==English==
===Etymology===
From {{der|en|fro|-métrie}}, from {{der|en|la|-metria}}, from {{der|en|grc|-μετρία}}, from {{m|grc|μέτρον||[[measurement]]}} + {{m|grc|-ίᾱ||[[-y]]: ''[[form]]ing [[abstract noun]]s''}}. Equivalent to {{suffix|en|-meter|y}}.
===Suffix===
{{en-suffix}}
# මිනුම් සහ මැනීම සම්බන්ධ නාම පද නිර්මාණය කරයි. [තහවුරු කර නොමැත]
====Derived terms====
{{suffixsee|en}}
* {{l|en|geometry}}
* {{l|en|isometry}}
* {{l|en|symmetry}}
* {{l|en|trigonometry}}
====Related terms====
* {{l|en|-meter}}
* {{l|en|-metric}}
bnqgmq7zl1mdymikw7scr5n077azqz9
gematria
0
125623
193598
2024-11-21T11:29:58Z
Lee
19
නිර්මාණය
193598
wikitext
text/x-wiki
==English==
===Etymology===
From {{der|en|arc|}}, from {{der|en|grc|γεωμετρία||geometry}}. {{doublet|en|geometry}}.
===Pronunciation===
* {{IPA|en|/ɡɪˈmeɪ.tɹi.ə/|/ɡɪˈmɑ.tɹi.ə/}}
===Noun===
{{en-noun|~|s|gematriot}}
# {{rfdef|en}}
====Hypernyms====
* {{l|en|numerology}}
====Derived terms====
* {{l|en|gematric}}
3pwpiuyltqclkcipez8za97exep9mci
geometria
0
125624
193599
2024-11-21T11:32:25Z
Lee
19
නිර්මාණය
193599
wikitext
text/x-wiki
{{also|geometría}}
==Latin==
{{wikipedia|lang=la}}
===Etymology===
From {{der|la|grc|γεωμετρία||geometry, land-survey}}, from {{m|grc|γεωμετρέω||to practice geometry, to measure or survey}}, [[back-formation]] from {{m|grc|γεωμέτρης||land measurer}}, from {{m|grc|γῆ||earth, land, country}} + {{m|grc|μετρέω||to measure, to count}} or {{m|grc|-μετρία||measurement}}, from {{m|grc|μέτρον||a measure}}.
===Pronunciation===
* {{la-IPA|geōmetria}}
===Noun===
{{la-noun|geōmetria<1>}}
# {{lb|la|mathematics}} [[geometry]]
====Declension====
{{la-ndecl|geōmetria<1>}}
====Related terms====
* {{l|la|geōmetrēs}}
* {{l|la|geōmetricus}}
====Descendants====
* {{desc|ast|xeometría}}
* {{desc|ca|geometria}}
* {{desc|en|geometry}}
* {{desc|eo|geometrio}}
* {{desc|gl|xeometría}}
* {{desc|hu|geometria}}
* {{desc|it|geometria}}
* {{desc|scn|giumitrìa}}
* {{desc|es|geometría}}
* {{desc|pl|geometria}}
* {{desc|pt|geometria}}
* {{desc|sk|geometria}}
===References===
* {{R:L&S}}
* {{R:Elementary Lewis}}
* {{R:M&A}}
{{cln|la|terms prefixed with geo-|terms suffixed with -metria}}
{{c|la|Mathematics|Geometry|Surveying}}
mnwnir190veu6lfgoiv0crddx8ylj2z
geometriae
0
125625
193600
2024-11-21T11:32:43Z
Lee
19
නිර්මාණය
193600
wikitext
text/x-wiki
==Latin==
===Noun===
{{head|la|noun form|head=geōmetriae}}
# {{inflection of|la|geōmetria||nom//voc|p|;|gen//dat|s}}
9cwsp1fnfmwjxnupcmsv3gm2sqcjl36
ප්රවර්ගය:ලතින් යෙදුම්, geo- උපසර්ග සහිත
14
125626
193601
2023-08-28T07:33:41Z
en>WingerBot
0
Created page with "{{auto cat}}"
193601
wikitext
text/x-wiki
{{auto cat}}
eomzlm5v4j7ond1phrju7cnue91g5qx
193602
193601
2024-11-21T11:33:24Z
Lee
19
[[:en:Category:Latin_terms_prefixed_with_geo-]] වෙතින් එක් සංශෝධනයක්
193601
wikitext
text/x-wiki
{{auto cat}}
eomzlm5v4j7ond1phrju7cnue91g5qx
193603
193602
2024-11-21T11:34:05Z
Lee
19
Lee විසින් [[ප්රවර්ගය:Latin terms prefixed with geo-]] සිට [[ප්රවර්ගය:ලතින් terms prefixed with geo-]] වෙත පිටුව ගෙන යන ලදී
193601
wikitext
text/x-wiki
{{auto cat}}
eomzlm5v4j7ond1phrju7cnue91g5qx
ප්රවර්ගය:Latin terms prefixed with geo-
14
125627
193604
2024-11-21T11:34:06Z
Lee
19
Lee විසින් [[ප්රවර්ගය:Latin terms prefixed with geo-]] සිට [[ප්රවර්ගය:ලතින් terms prefixed with geo-]] වෙත පිටුව ගෙන යන ලදී
193604
wikitext
text/x-wiki
#යළියොමුව [[:ප්රවර්ගය:ලතින් terms prefixed with geo-]]
d9e0ac21be5ohskvtnk3l475xziq4c3
සැකිල්ල:prefixsee
10
125628
193606
2023-07-14T00:59:36Z
en>WingerBot
0
[[Module:compound]] and dependencies renamed to [[Module:affix]] (manually assisted)
193606
wikitext
text/x-wiki
{{#invoke:affix/templates|derivsee|derivtype=prefix}}<!--
--><noinclude>
{{documentation}}
[[Category:Internal link templates]]
</noinclude>
i3j5d62g5w75pgy3ionris60f06bttl
193607
193606
2024-11-21T11:58:36Z
Lee
19
[[:en:Template:prefixsee]] වෙතින් එක් සංශෝධනයක්
193606
wikitext
text/x-wiki
{{#invoke:affix/templates|derivsee|derivtype=prefix}}<!--
--><noinclude>
{{documentation}}
[[Category:Internal link templates]]
</noinclude>
i3j5d62g5w75pgy3ionris60f06bttl