MediaWiki code snippets

From Organic Design
Revision as of 01:54, 13 January 2012 by Nad (Talk | contribs)

Getting into MediaWiki coding

Here's a useful checklist of things to have set up before starting with MediaWiki development; see also MW:How to debug and MW:Developer hub.

  • Full browser and shell access to a running instance of every major version
  • Local access to the source code for all those versions
  • Use wfDebugDieBacktrace() to stop code and report the call stack
  • A good text editor with regular expression support and file search capability (we use Geany, which runs on most platforms)
  • A wikia setup so that extension code changes can be quickly tested in all versions and different extension versions can be quickly swapped
  • Tools or code for stopping the PHP code and outputting the relevant items in scope
  • Tools for debugging the JS and CSS environment (e.g. Firebug)
  • Bookmarks for all the documentation you've found useful
  • Bookmarks to code snippets and examples (within your own wiki, http://mediawiki.org, other sites and in the MediaWiki code)
  • Your own repository of snippets for achieving common objectives in the MediaWiki runtime environment

Articles

Get article content

 
$title    = Title::newFromText($titleText);
$article  = new Article($title);
$wikitext = $article->getContent();
 

Edit or create an article

Editing and creating an article are both achieved using the Article::doEdit method, which takes two required parameters and one optional one. The first two are the text content and the edit summary. The optional third parameter is a bitfield of flags which adjust the operation. The available flags and their meanings are listed below, followed by example usage.

  • EDIT_NEW: Article is known or assumed to be non-existent, create a new one
  • EDIT_UPDATE: Article is known or assumed to be pre-existing, update it
  • EDIT_MINOR: Mark this edit minor, if the user is allowed to do so
  • EDIT_SUPPRESS_RC: Do not log the change in recentchanges
  • EDIT_FORCE_BOT: Mark the edit a "bot" edit regardless of user rights
  • EDIT_DEFER_UPDATES: Defer some of the updates until the end of index.php
  • EDIT_AUTOSUMMARY: Fill in blank summaries with generated text where possible
 
$title    = Title::newFromText($titleText);
$article  = new Article($title);
$article->doEdit($text, $summary, EDIT_UPDATE|EDIT_MINOR);
 
  • Note: $wgUser must be set before calling this function.
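In a context with no web session (e.g. a maintenance script), $wgUser can be pointed at an existing account before the edit. A sketch only, where the account name "WikiSysop" is an assumption:

```php
<?php
# Sketch: give $wgUser a valid identity before doEdit() is called.
# "WikiSysop" is a hypothetical account name -- use one that exists on your wiki.
global $wgUser;
$wgUser = User::newFromName( 'WikiSysop' );

$title   = Title::newFromText( $titleText );
$article = new Article( $title );
$article->doEdit( $text, $summary, EDIT_NEW|EDIT_FORCE_BOT );
```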

Image URLs

Given the name of an image page, obtain the full URL of the image.

 
$title = Title::newFromText($v, NS_IMAGE);
$image = Image::newFromTitle($title);
if ($image && $image->exists()) {
	$url      = $image->getURL();                      # Gets the URL for the image at its normal size
	$url_50px = $image->getThumbnail(50,50)->getUrl(); # Gets the URL for the image at a specified size
}
 

Image links

Sometimes you might want to prevent images from linking to the image page. This extension function removes the link.

 
$wgHooks['OutputPageBeforeHTML'][] = 'removeImageLinks';
function removeImageLinks( &$out, &$text ) {
	$text = preg_replace( '|<a.+?class=.image.+?>.*?(<img.+?>).*?</a>|i', '$1', $text );
	return true;
}
 

Adding stub code to newly created articles

This will add {{stub}} to the beginning of article content when it is first created. It can then be removed on a subsequent edit if desired.

 
$wgHooks['ArticleSave'][] = 'wgAddStub';
function wgAddStub( &$article, &$user, &$text, &$summary, $minor, $watchthis, $sectionanchor, &$flags, &$status ) {
	$text = ( $article->exists() ? "" : "{{stub}}\n" ) . $text;
	return true;
}
 

Security

Probably the most common security-related LocalSettings hack is to allow pages in locked-down wikis to be publicly viewable if they're in a particular category. The following example adds articles in Category:Public to the $wgWhitelistRead array.

 
$wgHooks['UserGetRights'][] = 'wfPublicCat';
function wfPublicCat() {
	global $wgWhitelistRead;
	$title = Title::newFromText( $_REQUEST['title'] );
	if( is_object( $title ) ) {
		$id   = $title->getArticleID();
		$dbr  = wfGetDB( DB_SLAVE );
		$cat  = $dbr->addQuotes( 'Public' );
		$cl   = $dbr->tableName( 'categorylinks' );
		if( $dbr->selectRow( $cl, '0', "cl_from = $id AND cl_to = $cat" ) ) $wgWhitelistRead[] = $title->getPrefixedText();
	}
	return true;
}
 


From MediaWiki 1.17 onwards, the OutputPageBodyAttributes hook can be used to modify the classes and other attributes of the body tag. In the following example the hook adds classes to the body tag depending on whether the user is anonymous, logged in or a sysop.

 
$wgHooks['OutputPageBodyAttributes'][] = 'wfAddBodyClasses';
function wfAddBodyClasses( $out, $sk, $bodyAttrs ) {
	global $wgUser;
	if( $wgUser->isAnon() ) $bodyAttrs['class'] .= ' anon';
	if( $wgUser->isLoggedIn() ) $bodyAttrs['class'] .= ' user';
	if( in_array( 'sysop', $wgUser->getEffectiveGroups() ) ) $bodyAttrs['class'] .= ' sysop';
	return true;
}
 


Here's an example which was created in response to a support-desk question. It prevents users from editing other users' user pages:

 
$wgExtensionFunctions[] = 'wfProtectUserPages';
function wfProtectUserPages() {
	global $wgUser,$wgGroupPermissions;
	$title = Title::newFromText( $_REQUEST['title'] );
	if( is_object( $title ) && $title->getNamespace() == NS_USER && $wgUser->getName() != $title->getText() )
		$wgGroupPermissions['user']['edit'] = false;
}
 


Here is a similar example which restricts all namespaces except MAIN from anonymous users:

 
$wgExtensionFunctions[] = 'wfProtectNamespaces';
$wgWhitelistRead = array('Special:Userlogin', '-', 'MediaWiki:Monobook.css');
function wfProtectNamespaces() {
	global $wgUser,$wgGroupPermissions;
	$title = Title::newFromText($_REQUEST['title']);
	if (is_object($title) && $title->getNamespace() != 0 && $wgUser->isAnon())
		$wgGroupPermissions['*']['read'] = false;
}
 


The following snippet redirects all requests to HTTPS if the user is a sysop. It also bounces non-sysops back to plain HTTP, which can be useful if you don't want HTTPS links showing up in search engines (for example if you have a self-signed certificate). The login page is exempt from the bounce, since bouncing there would confuse our bots that log in over HTTPS:

 
# Force HTTPS for sysops (and non-HTTPS for non-sysops)
$wgExtensionFunctions[] = 'wfSecureSysops';
function wfSecureSysops() {
		global $wgUser;
		if( in_array( 'sysop', $wgUser->getEffectiveGroups() ) ) {
				if( !isset( $_SERVER['HTTPS'] ) ) {
						header( "Location: https://" . $_SERVER['HTTP_HOST'] . $_SERVER['REQUEST_URI'] );
						exit;
				}
		} else {
				if( isset( $_SERVER['HTTPS'] ) && ( !array_key_exists( 'title', $_REQUEST ) || $_REQUEST['title'] != 'Special:UserLogin' ) ) {
						header( "Location: http://" . $_SERVER['HTTP_HOST'] . $_SERVER['REQUEST_URI'] );
						exit;
				}
		}
}
 


The following code checks whether a request is local by comparing the client address against the interface addresses reported by ifconfig, covering both IPv4 and IPv6. Note that it relies on the "inet addr:"/"inet6 addr:" output format of older Linux ifconfig versions.

 
function isLocal() {
	return preg_match_all( "|inet6? addr:\s*([0-9a-f.:]+)|", `/sbin/ifconfig`, $matches ) && in_array( $_SERVER['REMOTE_ADDR'], $matches[1] );
}
 

Returning content to the client

Raw wikitext content

This function returns the raw content specified in $text; it can be called any time from extension setup onward. If the $save parameter is supplied, the browser will show a download dialog with the default name set to $save; otherwise the content downloads without prompting. If $expand is set to true, any templates, parser functions or variables in the content are expanded first.


 
function raw($text, $expand = false, $save = false) {
	global $wgOut, $wgParser;
	if ($expand) $text = $wgParser->preprocess($text, new Title(), new ParserOptions());
	$wgOut->disable();
	wfResetOutputBuffers();
	header('Content-Type: application/octet-stream');
	if ($save) header("Content-Disposition: attachment; filename=\"$save\"");
	echo $text;
}
 

Return an HTTP error page

 
global $wgOut, $wgParser;
$wgOut->disable();
wfResetOutputBuffers();
header('HTTP/1.0 404 Not Found');
$err = '<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN"><html><head><title>404 Not Found</title></head>
	<body><h1>Not Found</h1><p>The requested URL was not found on this server.</p></body></html>';
echo $err;
 

Domain-based default redirect

If no page title is specified, redirect to a default depending on the domain name in the requested URL. In this example, requests to abc.org or any of its subdomains with no title specified will redirect to the Welcome to ABC article, and any requests to the exact domain www.xyz.com without a title end up at the XYZ Home article. Titleless requests to any other domain which resolves to this example wiki are unaffected and left to the rest of the configuration to deal with. This code should be executed early in LocalSettings.php, before any extensions are included but after $wgServer is defined.

 
$d = $_SERVER['SERVER_NAME'];
$t = $_REQUEST['title'];
if (empty($t)) $t = preg_replace('|^/|', '', $_SERVER['PATH_INFO']);
if (empty($t) || $t == 'Main_Page') {
	if (preg_match('|abc\.org$|', $d)) { header("Location: $wgServer/Welcome_to_ABC"); exit; }
	if ($d == 'www.xyz.com')           { header("Location: $wgServer/XYZ_Home");       exit; }
}
}
 

Add meta tags to the page

 
$wgExtensionFunctions[] = 'wfAddMetaTag';
function wfAddMetaTag() {
	global $wgOut;
	$wgOut->addMeta('name1', 'value1');
	$wgOut->addMeta('name2', 'value2');
}
 

Using the parser

Parse wikitext

 
$html = $wgParser->parse($wikitext, $title, new ParserOptions(), true, true)->getText();
 

Expand templates only

 
$wikitext = $wgParser->preprocess($wikitext, $title, new ParserOptions());
 

Replace triple-brace arguments in wikitext content

 
$parser->replaceVariables($wikitext, $args, true);
 

If $wikitext is set to "hello {{{you}}} this is {{{me}}}", then the "you" and "me" keys of the $args array will replace the corresponding triple-brace arguments in the wikitext content. This is a useful method to know because there are actually a number of difficulties involved in implementing it since it must account for both named and ordered parameters and default values (which can also contain brace-expressions).
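As an illustration of the simple cases only, here is a minimal standalone sketch (not the MediaWiki implementation): it handles named parameters and plain defaults, but not ordered parameters or nested brace-expressions inside defaults.

```php
<?php
# Minimal sketch only -- MediaWiki's replaceVariables() handles many more
# cases (ordered parameters, brace-expressions inside default values, etc.)
function replaceTripleBraces( $wikitext, $args ) {
	return preg_replace_callback(
		'/\{\{\{(\w+)(?:\|([^}]*))?\}\}\}/',
		function( $m ) use ( $args ) {
			if( isset( $args[$m[1]] ) ) return $args[$m[1]]; # named argument
			return isset( $m[2] ) ? $m[2] : $m[0];           # default value, or leave intact
		},
		$wikitext
	);
}

echo replaceTripleBraces( 'hello {{{you}}} this is {{{me}}}',
	array( 'you' => 'Alice', 'me' => 'Bob' ) ); # hello Alice this is Bob
```

Note that the anonymous function requires PHP 5.3 or later.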

Using named parameters from a parser-function callback

You can use named parameters in your parser functions, e.g. {{#drops:for=Rikkukin the Defender|zone=The Ascent|h=3}}, but you will need to manually split the arguments into key/value pairs in your callback function, as in the following example code:

 
$args = array();
foreach ($argv as $arg)
	if (!is_object($arg))
		preg_match('/^(\\w+)\\s*=\\s*(.+)$/is', $arg, $match) ? $args[$match[1]] = $match[2] : $args[] = $arg;
 

This snippet creates a hash from the arguments passed to your callback function (ignoring any which are objects, such as the first one, which is $parser). The resulting hash contains numeric keys for all the normal unnamed parameters of your parser function, and string keys matching all the name=value parameters.
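As a worked example, here is the splitting loop above run against a hypothetical argument list mirroring the {{#drops:...}} call shown earlier (with the leading $parser object already removed):

```php
<?php
# Worked example of the splitting loop. The argument values are hypothetical,
# mirroring the {{#drops:...}} call above, plus one unnamed value.
$argv = array( 'for=Rikkukin the Defender', 'zone=The Ascent', 'h=3', 'loose value' );

$args = array();
foreach( $argv as $arg )
	if( !is_object( $arg ) )
		preg_match( '/^(\\w+)\\s*=\\s*(.+)$/is', $arg, $match ) ? $args[$match[1]] = $match[2] : $args[] = $arg;

print_r( $args );
# $args now has string keys 'for', 'zone' and 'h', plus numeric key 0
# holding the unnamed 'loose value' parameter.
```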

Return codes for parser functions

Typical example.

 
return array(
	$text,
	'found'   => true,
	'nowiki'  => false,
	'noparse' => false,
	'noargs'  => false,
	'isHTML'  => false
);
 

See also: http://www.organicdesign.co.nz/Extension:Example

Protecting raw HTML output from modification by MediaWiki

See: How can I avoid modification of my extension's HTML output on mediawiki.org. Thanks to Duesentrieb for digging out this info.

Post processing to remove unwanted breaks and entities from output

 
function efFixEmptyLineBug(&$parser, &$text) {
	$text = preg_replace('|<p>\s*<br\s*/>\s*</p>|s', '', $text);
	return true;
}
 

JavaScript

Here are some JavaScript snippets that can be added to MediaWiki:Common.js

Make sortable table states persistent with cookies

Note that this function requires getCookie and setCookie, which are currently in our navbar.js. They're just the standard cookie getting and setting functions recommended by the W3C.
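Those helpers are not reproduced on this page; a minimal sketch of such standard cookie functions (assumed names, not the exact navbar.js versions) might look like this:

```javascript
// Minimal getCookie/setCookie sketch; the real navbar.js versions may differ.
function setCookie(name, value, days) {
    var expires = '';
    if (days) {
        var d = new Date();
        d.setTime(d.getTime() + days * 24 * 60 * 60 * 1000);
        expires = '; expires=' + d.toUTCString();
    }
    document.cookie = name + '=' + encodeURIComponent(value) + expires + '; path=/';
}

function getCookie(name) {
    var parts = document.cookie.split(';');
    for (var i = 0; i < parts.length; i++) {
        var c = parts[i].replace(/^\s+/, '');
        if (c.indexOf(name + '=') === 0)
            return decodeURIComponent(c.substring(name.length + 1));
    }
    return null;
}
```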

 
addOnloadHook( function() {
    $('.sortable').each( function() {
        var id = $(this).attr('id');
        document.shCookie = getCookie('sortheader-'+id);
        document.sortheaderId = 0;
        $('#'+id+' a.sortheader').each( function() {
            var id = $(this).parent().parent().parent().parent().attr('id');
            var sh = document.sortheaderId++;
            if( sh+100 == document.shCookie ) { ts_resortTable(this); ts_resortTable(this); }
            if( sh == document.shCookie ) { ts_resortTable(this); sh += 100; }
            $(this).bind('click', {id: id, sh: sh}, function(e) {
                setCookie('sortheader-'+e.data.id, e.data.sh, 1);
            });
        });
    });
});
 

MediaWiki Environment

Article title

This should be called at an appropriate time such as from the OutputPageBeforeHTML hook.

 
$wgOut->setPageTitle( 'foo' );
 

Article queries

See MW:Manual:Database access for additional information about database access in MediaWiki.


List article titles from a category

 
$list = array();
$dbr  = &wfGetDB( DB_SLAVE );
$cl   = $dbr->tableName( 'categorylinks' );
$cat  = $dbr->addQuotes( Title::newFromText( $cat )->getDBkey() );
$res  = $dbr->select( $cl, 'cl_from', "cl_to = $cat", __METHOD__, array( 'ORDER BY' => 'cl_sortkey' ) );
while( $row = $dbr->fetchRow( $res ) ) $list[] = Title::newFromID( $row[0] )->getPrefixedText();
 

Adjust the $list addition to your own needs. This example creates a title object for each row and then calls its getPrefixedText method, which returns the title as a string including the namespace.

List categories an article belongs to

 
$list = array();
$dbr  = &wfGetDB( DB_SLAVE );
$cl   = $dbr->tableName( 'categorylinks' );
$id   = Title::newFromText( 'article-title' )->getArticleID();
$res  = $dbr->select( $cl, 'cl_to', "cl_from = $id", __METHOD__, array('ORDER BY' => 'cl_sortkey' ) );
while( $row = $dbr->fetchRow( $res ) ) $list[] = $row[0];
$dbr->freeResult( $res );
 

Check if a title is in a category

 
function inCat( $title, $cat ) {
	if( !is_object( $title ) ) $title = Title::newFromText( $title );
	$id   = $title->getArticleID();
	$dbr  = &wfGetDB( DB_SLAVE );
	$cat  = $dbr->addQuotes( Title::newFromText( $cat )->getDBkey() );
	$cl   = $dbr->tableName( 'categorylinks' );
	return $dbr->selectRow( $cl, '0', "cl_from = $id AND cl_to = $cat", __METHOD__ );
}
 

Get a list of articles in a namespace (number)

 
function getArticlesInNamespace( $namespace ) {
	$dbr   = &wfGetDB(DB_SLAVE);
	$list  = array();
	$table = $dbr->tableName( 'page' );
	$res   = $dbr->select( $table, 'page_title', "page_namespace = $namespace" );
	while( $row = $dbr->fetchRow( $res ) ) $list[] = $row[0];
	$dbr->freeResult( $res );
	return $list;
}
 

Get a list of articles using a template

 
function usesTemplate( $tmpl ) {
	$dbr   = &wfGetDB(DB_SLAVE);
	$list  = array();
	$tmpl  = $dbr->addQuotes( Title::newFromText( $tmpl )->getDBkey() );
	$table = $dbr->tableName( 'templatelinks' );
	$res   = $dbr->select( $table, 'tl_from', "tl_namespace = 10 AND tl_title = $tmpl" );
	while( $row = $dbr->fetchRow( $res ) ) $list[] = $row[0];
	$dbr->freeResult( $res );
	return $list;
}
 

Remove category pages for empty categories

 
function removeEmptyCategories() {
 
	// Get all category pages
	$dbr   = &wfGetDB(DB_SLAVE);
	$cats  = array();
	$table = $dbr->tableName( 'page' );
	$res   = $dbr->select( $table, 'page_title', "page_namespace = " . NS_CATEGORY );
	while( $row = $dbr->fetchRow( $res ) ) $cats[] = $row[0];
	$dbr->freeResult( $res );
 
	// Delete those which are for empty cats
	foreach( $cats as $cat ) {
		$dbr  = &wfGetDB( DB_SLAVE );
		$cl   = $dbr->tableName( 'categorylinks' );
		$qcat = $dbr->addQuotes( Title::newFromText( $cat )->getDBkey() );
		if( !$dbr->selectRow( $cl, 'cl_from', "cl_to = $qcat" ) ) {
			$title = Title::newFromText( $cat, NS_CATEGORY );
			$article = new Article( $title );
			$article->doDelete( "Obsolete category page - this category contains no items" );
		}
	}
}
 

Article queries using DPL

This DPL query example provides a standard list of page links in wikitext bullet list format.

{{#dpl:category=Foo|format=,*[[%PAGE%]],\n,}}

Misc

examineBraces

This function returns an array of the brace structure found in the passed wikitext parameter. This has also been implemented in Perl, see the wikiExamineBraces function in wiki.pl.

 
function examineBraces( &$content ) {
	$braces = array();
	$depths = array();
	$depth = 1;
	$index = 0;
	while( preg_match( '/\\{\\{\\s*([#a-z0-9_]*:?)|\\}\\}/is', $content, $match, PREG_OFFSET_CAPTURE, $index ) ) {
		$index = $match[0][1] + 2;
		if( $match[0][0] == '}}' ) {
			$brace =& $braces[$depths[$depth-1]];
			$brace['LENGTH'] = $match[0][1] - $brace['OFFSET'] + 2;
			$brace['DEPTH']  = $depth--;
		}
		else {
			$depths[$depth++] = count( $braces );
			$braces[] = array(
				'NAME'   => $match[1][0],
				'OFFSET' => $match[0][1]
			);
		}
	}
	return $braces;
}
 


The following input,

foo{{#bar:baz|biz{{foo|shmoo}}}}{{moo}}baz


Gives the following array:

Array(
    [0] => Array(
        [NAME]   => #bar
        [OFFSET] => 3
        [LENGTH] => 29
        [DEPTH]  => 1
        )

    [1] => Array(
        [NAME]   => foo
        [OFFSET] => 17
        [LENGTH] => 13
        [DEPTH]  => 2
        )

    [2] => Array(
        [NAME]   => moo
        [OFFSET] => 32
        [LENGTH] => 7
        [DEPTH]  => 1
        )
    )

The array output is designed to integrate with the substr_replace function's arguments (subject, replacement, offset, length). The index order 0, 1, 2 refers to the order in which the brace expressions are found (from left to right).
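For example, the {{moo}} entry from the sample array above (offset 32, length 7) can be swapped out with substr_replace; the replacement string here is an arbitrary illustration, and string keys stand in for the constants:

```php
<?php
# Uses the sample offsets shown in the array above; 'MOO-EXPANDED' is an
# arbitrary replacement string for illustration only.
$content = 'foo{{#bar:baz|biz{{foo|shmoo}}}}{{moo}}baz';
$brace   = array( 'NAME' => 'moo', 'OFFSET' => 32, 'LENGTH' => 7, 'DEPTH' => 1 );

$content = substr_replace( $content, 'MOO-EXPANDED', $brace['OFFSET'], $brace['LENGTH'] );
echo $content; # foo{{#bar:baz|biz{{foo|shmoo}}}}MOO-EXPANDEDbaz
```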

Google Analytics

 
$wgExtensionFunctions[] = 'wfGoogleAnalytics';
function wfGoogleAnalytics() {
	global $wgOut;
	$wgOut->addScript(
		'<script type="text/javascript">
		var gaJsHost = (("https:" == document.location.protocol) ? "https://ssl." : "http://www.");
		document.write(unescape("%3Cscript src=\'" + gaJsHost + "google-analytics.com/ga.js\' type=\'text/javascript\'%3E%3C/script%3E"));
		</script><script type="text/javascript">
		var pageTracker = _gat._getTracker("INSERT YOUR TRACKING CODE HERE");
		pageTracker._trackPageview();</script>'
	);
}
 

Force all headings to use outline numbering

There is a user preference to make all headings use outline numbering, but no way to make that the default for all users. Here are a few lines of code which can be added to your LocalSettings.php file to do that.

 
$wgExtensionFunctions[] = 'wfNumberHeadings';
function wfNumberHeadings() {
	global $wgUser;
	$wgUser->setOption('numberheadings', true);
}
 

Integrating with the Skin

Adding a new action tab

The following code adds a new action tab with a corresponding link. To process the new action, add a processing function to the UnknownAction hook.

 
$wgExampleAction = 'example';
 
$wgHooks['SkinTemplateTabs'][] = 'wfAddExampleTab';
 
function wfAddExampleTab( &$skin, &$actions ) {
	global $wgTitle, $wgRequest, $wgExampleAction;
	if( is_object( $wgTitle ) ) {
		$selected = $wgRequest->getText( 'action' ) == $wgExampleAction ? 'selected' : false;
		$url      = $wgTitle->getLocalURL( "action=$wgExampleAction" );
		$actions[$wgExampleAction] = array(
			'text'  => $wgExampleAction, # should use wfMsg( $wgExampleAction )
			'class' => $selected,
			'href'  => $url
		);
	}
	return true;
}
 


The Vector skin (optional in 1.16, default from 1.17 onwards) uses a new hook, SkinTemplateNavigation. You can cover both Vector and earlier skins such as Monobook or Modern by including code for both hooks. The example below uses the MediaWiki namespace to name the relevant elements of the array. You can put this code in LocalSettings.php after the inclusion of extensions.

 
$wgHooks['SkinTemplateTabs'][] = 'onSkinTemplateTabs';
$wgHooks['SkinTemplateNavigation'][] = 'onSkinTemplateNavigation';
 
function onSkinTemplateTabs( $skin, &$actions ) {
	$actions[wfMsg( 'edit-addtab-id' )] = array(
		'class' => wfMsg( 'edit-addtab-class' ),
		'text' => wfMsg( 'edit-addtab-text' ),
		'href' => wfMsg( 'edit-addtab-href' )
	);		
	return true;
}
 
function onSkinTemplateNavigation( &$skin, &$contentActions ) {
	$contentActions['views'][wfMsg( 'edit-addtab-id' )] = array(
		'class' => wfMsg( 'edit-addtab-class' ),
		'text' => wfMsg( 'edit-addtab-text' ),
		'href' => wfMsg( 'edit-addtab-href' )
	);
	return true;
}
 

Wikitext in Sidebar

By default the MediaWiki:Sidebar article content is not normal wikitext. This snippet fixes that, and also ensures that the content can fall back to an i18n message if the article is not present (the normal behaviour for articles in the MediaWiki namespace). Replace a section such as the toolbox or navlinks in the skins/MonoBook.php file with the following:

 
global $wgUser, $wgTitle, $wgParser;
$title = 'sidebar'; # note the lcfirst is important here since it's also a msg key
$article = new Article( Title::newFromText( $title, NS_MEDIAWIKI ) );
$text = $article->fetchContent();
if ( empty( $text ) ) $text = wfMsg( $title );
if ( is_object( $wgParser ) ) { $psr = $wgParser; $opt = $wgParser->mOptions; }
else { $psr = new Parser; $opt = NULL; }
if ( !is_object( $opt ) ) $opt = ParserOptions::newFromUser( $wgUser );
echo $psr->parse( $text, $wgTitle, $opt, true, true )->getText();
 


The procedure is the same with Vector-based skins; the difference is that you place the code inside the divs with id portal and then id body, as per the other examples in skins/Vector.php. It should replace the following code and/or the toolbox:

 
<?php $this->renderPortals( $this->data['sidebar'] ); ?>
 

Development server identification

The idea is to have a picture or text that lets a developer see at a glance that they are on the development instance rather than the production instance of MediaWiki. By adding to $wgSiteNotice inside a conditional, every article served from the matching domain will show the marker.

 
if( $wgServer == "http://localhost" )  # or any other domain
  $wgSiteNotice = '<span style="position:absolute;top:0px;left:-140px;z-index:10">[[Image:Gnome-devel.png|100px]]</span>';
 

This approach was based on W:User:East718/include, and the image source is Image:Gnome-devel.svg.

Importing and Exporting articles

The following snippet imports articles from the XML export file named in $file into the main namespace. The $resultCount variable holds the number of articles imported on success.

 
$source = ImportStreamSource::newFromFile( $file );
$importer = new WikiImporter( $source );
$importer->setTargetNamespace( NS_MAIN );
$reporter = new ImportReporter( $importer, false, false, false );
$reporter->open();
$result = $importer->doImport();
$resultCount = $reporter->close();
if( WikiError::isError( $result ) ) die( $result->getMessage() );
elseif( WikiError::isError( $resultCount ) ) die( $resultCount->getMessage() );
 


This one exports the current revision of all articles whose names are contained in the $pages array to the file named in $file.

 
$dbr = wfGetDB( DB_SLAVE );
$exporter = new WikiExporter( $dbr, WikiExporter::CURRENT, WikiExporter::BUFFER );
$exporter->list_authors = false;
$exporter->sink = new DumpFileOutput( $file );
$exporter->openStream();
foreach( $pages as $page ) {
	$title = Title::newFromText( $page );
	$exporter->pageByTitle( $title );
}
$exporter->closeStream();
fclose( $exporter->sink->handle );
 

See also

Contents

Getting in to MediaWiki coding

Here's a useful checklist of things to have set up before starting with MediaWiki development; see also MW:How to debug and MW:Developer hub.

  • Full browser and shell access to a running instance of every major version
  • Local access to the source code for all those versions
  • Use wfDebugDieBacktrace() to stop code and report the call stack
  • A good text editor with regular expression support file search capability (we use Geany which runs on most platforms)
  • A wikia setup so that extension code changes can be quickly tested in all versions and different extension versions can be quickly swapped
  • Tools or code for being able to stop the php code and output the necessary items in the scope
  • Tools for debugging the JS and CSS environment (eg. firebug)
  • Bookmarks for all the documentation you've found useful
  • Bookmarks to code snippets and examples (within your own wiki, http://mediawiki.org, other sites and in the mediawiki code)
  • Your own repository of snippets for achieving common objectives in the mediawiki runtime environment

Articles

Get article content

 
$title    = Title::newFromText($titleText);
$article  = new Article($title);
$wikitext = $article->getContent();
 

Edit or create an article

Editing or creating an article are both achieved using the Article::doEdit method which takes three parameters (the third is optional). The first two are the text content and edit summary. The third optional parameter is for flags which adjust the operation. The available flags and their meaning are listed below followed by example usage.

  • EDIT_NEW: Article is known or assumed to be non-existent, create a new one
  • EDIT_UPDATE: Article is known or assumed to be pre-existing, update it
  • EDIT_MINOR: Mark this edit minor, if the user is allowed to do so
  • EDIT_SUPPRESS_RC: Do not log the change in recentchanges
  • EDIT_FORCE_BOT: Mark the edit a "bot" edit regardless of user rights
  • EDIT_DEFER_UPDATES: Defer some of the updates until the end of index.php
  • EDIT_AUTOSUMMARY: Fill in blank summaries with generated text where possible
 
$title    = Title::newFromText($titleText);
$article  = new Article($title);
$article->doEdit($text, $summary, EDIT_UPDATE|EDIT_MINOR);
 
  • Note: $wgUser must be set before calling this function.

Image URLs

Given the name of an image page, obtain the full URL of the image.

 
$title = Title::newFromText($v, NS_IMAGE);
$image = Image::newFromTitle($title);
if ($image && $image->exists()) {
	$url      = $image->getURL();                      # Gets the URL for the image at its normal size
	$url_50px = $image->getThumbnail(50,50)->getUrl(); # Gets the URL for the image at a specified size
}
 

Image links

Sometimes you might want to prevent images from linking to the image page. This extension function removes the link.

 
$wgHooks['OutputPageBeforeHTML'][] = 'removeImageLinks';
function removeImageLinks( &$out, &$text ) {
	$text = preg_replace( '|<a.+?class=.image.+?>.*?(<img.+?>).*?</a>|i', '$1', $text );
	return true;
}
 

Adding stub code to newly created articles

This will add {{stub}} to the beginning of article content when it is first created. It can then be removed on a subsequent edit if desired.

 
$wgHooks['ArticleSave'][] = 'wgAddStub';
function wgAddStub( &$article, &$user, &$text, &$summary, $minor, $watchthis, $sectionanchor, &$flags, &$status ) {
	$text = ( $article->exists() ? "" : "{{stub}}\n" ) . $text;
	return true;
}
 

Security

Probably the most common security-related LocalSettings hack is to allow pages in locked down wikis to be publicly viewable if they're in a particular category. This following example adds articles in Category:Public to the $wgWhitelistRead array.

 
$wgHooks['UserGetRights'][] = 'wfPublicCat';
function wfPublicCat() {
	global $wgWhitelistRead;
	$title = Title::newFromText( $_REQUEST['title'] );
	if( is_object( $title ) ) {
		$id   = $title->getArticleID();
		$dbr  = wfGetDB( DB_SLAVE );
		$cat  = $dbr->addQuotes( 'Public' );
		$cl   = $dbr->tableName( 'categorylinks' );
		if( $dbr->selectRow( $cl, '0', "cl_from = $id AND cl_to = $cat" ) ) $wgWhitelistRead[] = $title->getPrefixedText();
	}
	return true;
}
 


From MediaWiki 1.17 onwards, the OutputPageBodyAttributes hook can be used to modify the classes and other attributed of the body tag. In the following example the hook is being used to add classes to the body tag depending on whether the user is anonymous, logged-in or a sysop.

 
$wgHooks['OutputPageBodyAttributes'][] = 'wfAddBodyClasses';
function wfAddBodyClasses( $out, $sk, $bodyAttrs ) {
	global $wgUser;
	if( $wgUser->isAnon() ) $bodyAttrs['class'] .= ' anon';
	if( $wgUser->isLoggedIn() ) $bodyAttrs['class'] .= ' user';
	if( in_array( 'sysop', $wgUser->getEffectiveGroups() ) ) $bodyAttrs['class'] .= ' sysop';
	return true;
}
 


Here's an example which was created in response to a support-desk question. It prevents users from editing other users user-pages:

 
$wgExtensionFunctions[] = 'wfProtectUserPages';
function wfProtectUserPages() {
	global $wgUser,$wgGroupPermissions;
	$title = Title::newFromText( $_REQUEST['title'] );
	if( is_object( $title ) && $title->getNamespace() == NS_USER && $wgUser->getName() != $title->getText() )
		$wgGroupPermissions['user']['edit'] = false;
}
 


Here is a similar example which restricts all namespaces except MAIN from anonymous users:

 
$wgExtensionFunctions[] = 'wfProtectNamespaces';
$wgWhitelistRead = array('Special:Userlogin', '-', 'MediaWiki:Monobook.css');
function wfProtectNamespaces() {
	global $wgUser,$wgGroupPermissions;
	$title = Title::newFromText($_REQUEST['title']);
	if (is_object($title) && $title->getNamespace() != 0 && $wgUser->isAnon())
		$wgGroupPermissions['*']['read'] = false;
}
 


The following snippet causes all requests to be redirected to HTTPS if the user is a sysop. It also bounces users to non-HTTPS if not a sysop which can be useful if you don't want HTTPS links showing up in search engines for example if you have a self-signed certificate (it doesn't do this if the user is on the login page though since that confuses our bots that use HTTPS connections):

 
# Force HTTPS for sysops (and non-HTTPS for non-sysops)
$wgExtensionFunctions[] = 'wfSecureSysops';
function wfSecureSysops() {
		global $wgUser;
		if( in_array( 'sysop', $wgUser->getEffectiveGroups() ) ) {
				if( !isset( $_SERVER['HTTPS'] ) ) {
						header( "Location: https://" . $_SERVER['HTTP_HOST'] . $_SERVER['REQUEST_URI'] );
						exit;
				}
		} else {
				if( isset( $_SERVER['HTTPS'] ) && ( !array_key_exists( 'title', $_REQUEST ) || $_REQUEST['title'] != 'Special:UserLogin' ) ) {
						header( "Location: http://" . $_SERVER['HTTP_HOST'] . $_SERVER['REQUEST_URI'] );
						exit;
				}
		}
}
 


The following code checks if a request is local, and works regardless of the IP address or whether it's IPv4 or IPv6.

 
function isLocal() {
	return preg_match_all( "|inet6? addr:\s*([0-9a-f.:]+)|", `/sbin/ifconfig`, $matches ) && in_array( $_SERVER['REMOTE_ADDR'], $matches[1] );
}
 

Returning content to the client

Raw wikitext content

This function returns the raw content specified in $text, it can be called any time from extension-setup or after. If the $save parameter is supplied it will bring up a download dialog with the default name set to $save, otherwise it will download and open unprompted. If $expand is set to true, then any templates, parser-functions or variables in the content will be expanded.


 
function raw($text, $expand = false, $save = false) {
	global $wgOut, $wgParser;
	if ($expand) $text = $wgParser->preprocess($text, new Title(), new ParserOptions());
	$wgOut->disable();
	wfResetOutputBuffers();
	header('Content-Type: application/octet-stream');
	if ($save) header("Content-Disposition: attachment; filename=\"$save\"");
	echo $text;
}
 

Return an HTTP error page

 
global $wgOut, $wgParser;
$wgOut->disable();
wfResetOutputBuffers();
header('HTTP/1.0 404 Not Found');
$err = '<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN"><html><head><title>404 Not Found</title></head>
	<body><h1>Not Found</h1><p>The requested URL was not found on this server.</p></body></html>';
echo $err;
 

Domain-based default redirect

If no page title is specified, redirect to a default depending on the domain name in the requested URL. In this example requests to abc.org or any of it's subdomains with no title specified will redirect to the Welcome to ABC article, and any requests to the exact domain of www.xyz.org without a title end up at the XYZ Home article. Titleless requests to any other domain which resolves to this example wiki will be unaffected and left to the rest of the configuration to deal with. This code should be executed early in the LocalSettings before any extensions are included, but after $wgServer is defined.

 
$d = $_SERVER['SERVER_NAME'];
$t = $_REQUEST['title'];
if (empty($t) && isset($_SERVER['PATH_INFO'])) $t = preg_replace('%^/%', '', $_SERVER['PATH_INFO']);
if (empty($t) || $t == 'Main_Page') {
	if (preg_match('%(^|\.)abc\.org$%', $d)) { header("Location: $wgServer/Welcome_to_ABC"); exit; }
	if ($d == 'www.xyz.com')                 { header("Location: $wgServer/XYZ_Home");       exit; }
}
 

Add meta tags to the page

 
$wgExtensionFunctions[] = 'wfAddMetaTag';
function wfAddMetaTag() {
	global $wgOut;
	$wgOut->addMeta('name1', 'value1');
	$wgOut->addMeta('name2', 'value2');
}
 

Using the parser

Parse wikitext

 
$html = $wgParser->parse($wikitext, $title, new ParserOptions(), true, true)->getText();
 

Expand templates only

 
$wikitext = $wgParser->preprocess($wikitext, $title, new ParserOptions());
 

Replace triple-brace arguments in wikitext content

 
$wikitext = $parser->replaceVariables($wikitext, $args, true);
 

If $wikitext is set to "hello {{{you}}} this is {{{me}}}", then the "you" and "me" keys of the $args array will replace the corresponding triple-brace arguments in the wikitext content. This is a useful method to know, because implementing such replacement yourself involves a number of difficulties: it must account for both named and ordered parameters and for default values (which can themselves contain brace expressions).
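As an illustration of the behaviour described above, here is a simplified stand-alone sketch of triple-brace replacement (the replaceTripleBraces name is ours; unlike the real replaceVariables it ignores ordered parameters, default values and nested brace expressions):

```php
// Replace {{{name}}} arguments with values from $args; unknown
// arguments are left untouched, as the parser would leave them
// when no default value is supplied.
function replaceTripleBraces( $wikitext, $args ) {
	return preg_replace_callback(
		'/\{\{\{(\w+)\}\}\}/',
		function( $m ) use ( $args ) {
			return isset( $args[$m[1]] ) ? $args[$m[1]] : $m[0];
		},
		$wikitext
	);
}
```

For example, replaceTripleBraces( 'hello {{{you}}} this is {{{me}}}', array( 'you' => 'Bob', 'me' => 'Alice' ) ) returns "hello Bob this is Alice".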

Using named parameters from a parser-function callback

You can use named parameters in your parser functions, e.g. {{#drops:for=Rikkukin the Defender|zone=The Ascent|h=3}}, but you will need to manually split the arguments into key/value pairs in your callback function, as in the following example code:

 
$args = array();
foreach ($argv as $arg)
	if (!is_object($arg))
		preg_match('/^(\\w+)\\s*=\\s*(.+)$/is', $arg, $match) ? $args[$match[1]] = $match[2] : $args[] = $arg;
 

This snippet creates a hash from the arguments passed to your callback function, ignoring any which are objects (such as the first argument, $parser). The resulting hash contains numeric keys for the ordinary unnamed parameters and string keys matching all the name=value parameters.
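Wrapped up as a stand-alone function (the splitNamedArgs name is ours), the splitting logic behaves like this:

```php
// Split raw parser-function arguments into a hash: name=value pairs
// become string keys, anything else keeps a numeric key, and objects
// (such as the $parser argument itself) are skipped.
function splitNamedArgs( $argv ) {
	$args = array();
	foreach ( $argv as $arg ) {
		if ( is_object( $arg ) ) continue;
		if ( preg_match( '/^(\\w+)\\s*=\\s*(.+)$/is', $arg, $match ) )
			$args[$match[1]] = $match[2];
		else
			$args[] = $arg;
	}
	return $args;
}
```

For the {{#drops:...}} example above, splitNamedArgs( array( 'for=Rikkukin the Defender', 'zone=The Ascent', 'h=3' ) ) returns array( 'for' => 'Rikkukin the Defender', 'zone' => 'The Ascent', 'h' => '3' ).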

Return codes for parser functions

Typical example.

 
return array(
	$text,
	'found'   => true,
	'nowiki'  => false,
	'noparse' => false,
	'noargs'  => false,
	'isHTML'  => false
);
 

See also: http://www.organicdesign.co.nz/Extension:Example

Protecting raw HTML output from modification by MediaWiki

See: How can I avoid modification of my extension's HTML output on mediawiki.org. Thanks to Duesentrieb for digging out this info.

Post processing to remove unwanted breaks and entities from output

 
function efFixEmptyLineBug(&$parser, &$text) {
	$text = preg_replace('|<p>\s*<br\s*/>\s*</p>|s', '', $text);
	return true;
}
 

JavaScript

Here are some JavaScript snippets that can be added to MediaWiki:Common.js

Make sortable table states persistent with cookies

Note that this function requires getCookie and setCookie which are currently in our navbar.js. They're just the standard cookie getting and setting functions recommended by W3C.

 
addOnloadHook( function() {
    $('.sortable').each( function() {
        var id = $(this).attr('id');
        document.shCookie = getCookie('sortheader-'+id);
        document.sortheaderId = 0;
        $('#'+id+' a.sortheader').each( function() {
            var id = $(this).parent().parent().parent().parent().attr('id');
            var sh = document.sortheaderId++;
            if( sh+100 == document.shCookie ) { ts_resortTable(this); ts_resortTable(this); }
            if( sh == document.shCookie ) { ts_resortTable(this); sh += 100; }
            $(this).bind('click', {id: id, sh: sh}, function(e) {
                setCookie('sortheader-'+e.data.id, e.data.sh, 1);
            });
        });
    });
});
 

MediaWiki Environment

Article title

This should be called at an appropriate time such as from the OutputPageBeforeHTML hook.

 
$wgOut->setPageTitle( 'foo' );
 

Article queries

See MW:Manual:Database access for additional information about database access in MediaWiki.


List article titles from a category

 
$list = array();
$dbr  = wfGetDB( DB_SLAVE );
$cl   = $dbr->tableName( 'categorylinks' );
$cat  = $dbr->addQuotes( Title::newFromText( $cat )->getDBkey() );
$res  = $dbr->select( $cl, 'cl_from', "cl_to = $cat", __METHOD__, array( 'ORDER BY' => 'cl_sortkey' ) );
while( $row = $dbr->fetchRow( $res ) ) $list[] = Title::newFromID( $row[0] )->getPrefixedText();
 

Adjust the $list addition to your own needs. This example creates a title object for each matching row and calls its getPrefixedText method, which returns the title as a string including the namespace.

List categories an article belongs to

 
$list = array();
$dbr  = wfGetDB( DB_SLAVE );
$cl   = $dbr->tableName( 'categorylinks' );
$id   = Title::newFromText( 'article-title' )->getArticleID();
$res  = $dbr->select( $cl, 'cl_to', "cl_from = $id", __METHOD__, array('ORDER BY' => 'cl_sortkey' ) );
while( $row = $dbr->fetchRow( $res ) ) $list[] = $row[0];
$dbr->freeResult( $res );
 

Check if a title is in a category

 
function inCat( $title, $cat ) {
	if( !is_object( $title ) ) $title = Title::newFromText( $title );
	$id   = $title->getArticleID();
	$dbr  = wfGetDB( DB_SLAVE );
	$cat  = $dbr->addQuotes( Title::newFromText( $cat )->getDBkey() );
	$cl   = $dbr->tableName( 'categorylinks' );
	return $dbr->selectRow( $cl, '0', "cl_from = $id AND cl_to = $cat", __METHOD__ );
}
 

Get a list of articles in a namespace (number)

 
function getArticlesInNamespace( $namespace ) {
	$dbr   = wfGetDB(DB_SLAVE);
	$list  = array();
	$table = $dbr->tableName( 'page' );
	$res   = $dbr->select( $table, 'page_title', "page_namespace = $namespace" );
	while( $row = $dbr->fetchRow( $res ) ) $list[] = $row[0];
	$dbr->freeResult( $res );
	return $list;
}
 

Get a list of articles using a template

 
function usesTemplate( $tmpl ) {
	$dbr   = wfGetDB(DB_SLAVE);
	$list  = array();
	$tmpl  = $dbr->addQuotes( Title::newFromText( $tmpl )->getDBkey() );
	$table = $dbr->tableName( 'templatelinks' );
	$res   = $dbr->select( $table, 'tl_from', "tl_namespace = 10 AND tl_title = $tmpl" );
	while( $row = $dbr->fetchRow( $res ) ) $list[] = $row[0];
	$dbr->freeResult( $res );
	return $list;
}
 

Remove category pages for empty categories

 
function removeEmptyCategories() {

	// Get the titles of all category pages
	$dbr   = wfGetDB(DB_SLAVE);
	$cats  = array();
	$table = $dbr->tableName( 'page' );
	$res   = $dbr->select( $table, 'page_title', "page_namespace = " . NS_CATEGORY );
	while( $row = $dbr->fetchRow( $res ) ) $cats[] = $row[0];
	$dbr->freeResult( $res );

	// Delete the pages of those categories which contain no items
	$cl = $dbr->tableName( 'categorylinks' );
	foreach( $cats as $cat ) {
		$qcat = $dbr->addQuotes( Title::newFromText( $cat )->getDBkey() );
		if( !$dbr->selectRow( $cl, 'cl_from', "cl_to = $qcat" ) ) {
			$title = Title::newFromText( $cat, NS_CATEGORY );
			$article = new Article( $title );
			$article->doDelete( "Obsolete category page - this category contains no items" );
		}
	}
}
 

Article queries using DPL

This DPL query example provides a standard list of page links in wikitext bullet list format.

{{#dpl:category=Foo|format=,*[[%PAGE%]],\n,}}

Misc

examineBraces

This function returns an array describing the brace structure found in the passed wikitext parameter. It has also been implemented in Perl; see the wikiExamineBraces function in wiki.pl.

 
function examineBraces( &$content ) {
	$braces = array();
	$depths = array();
	$depth = 1;
	$index = 0;
	while( preg_match( '/\\{\\{\\s*([#a-z0-9_]*:?)|\\}\\}/is', $content, $match, PREG_OFFSET_CAPTURE, $index ) ) {
		$index = $match[0][1] + 2;
		if( $match[0][0] == '}}' ) {
			$brace =& $braces[$depths[$depth-1]];
			$brace['LENGTH'] = $match[0][1] - $brace['OFFSET'] + 2;
			$brace['DEPTH']  = $depth--;
		}
		else {
			$depths[$depth++] = count( $braces );
			$braces[] = array(
				'NAME'   => $match[1][0],
				'OFFSET' => $match[0][1]
			);
		}
	}
	return $braces;
}
 


The following input,

foo{{#bar:baz|biz{{foo|shmoo}}}}{{moo}}baz


Gives the following array:

Array(
    [0] => Array(
        [NAME]   => #bar
        [OFFSET] => 3
        [LENGTH] => 29
        [DEPTH]  => 1
        )

    [1] => Array(
        [NAME]   => foo
        [OFFSET] => 17
        [LENGTH] => 13
        [DEPTH]  => 2
        )

    [2] => Array(
        [NAME]   => moo
        [OFFSET] => 32
        [LENGTH] => 7
        [DEPTH]  => 1
        )
    )

The array output is designed to integrate with the substr_replace function's arguments (subject, replace, offset, length). The 0, 1, 2 index order refers to the order in which the brace expressions are opened, from left to right.
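For example, replacing the whole outer #bar expression from the input above, using the OFFSET and LENGTH values of element 0 of the array (a sketch; the replacement string is arbitrary):

```php
$wikitext = 'foo{{#bar:baz|biz{{foo|shmoo}}}}{{moo}}baz';
// Element 0 of the array gave OFFSET 3 and LENGTH 29, which covers
// the entire {{#bar:...}} expression including its nested braces.
$result = substr_replace( $wikitext, 'EXPANDED', 3, 29 );
// $result is now 'fooEXPANDED{{moo}}baz'
```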

Google Analytics

 
$wgExtensionFunctions[] = 'wfGoogleAnalytics';
function wfGoogleAnalytics() {
	global $wgOut;
	$wgOut->addScript(
		'<script type="text/javascript">
		var gaJsHost = (("https:" == document.location.protocol) ? "https://ssl." : "http://www.");
		document.write(unescape("%3Cscript src=\'" + gaJsHost + "google-analytics.com/ga.js\' type=\'text/javascript\'%3E%3C/script%3E"));
		</script><script type="text/javascript">
		var pageTracker = _gat._getTracker("INSERT YOUR TRACKING CODE HERE");
		pageTracker._trackPageview();</script>'
	);
}
 

Force all headings to use outline numbering

There is a user preference to make all headings use outline numbering, but no way to make that the default for all users. Here are a few lines of code which can be added to your LocalSettings.php file to do just that.

 
$wgExtensionFunctions[] = 'wfNumberHeadings';
function wfNumberHeadings() {
	global $wgUser;
	$wgUser->setOption('numberheadings', true);
}
 

Integrating with the Skin

Adding a new action tab

The following code adds a new action tab with a corresponding link. To process the new action, add a processing function to the UnknownAction hook.

 
$wgExampleAction = 'example';
 
$wgHooks['SkinTemplateTabs'][] = 'wfAddExampleTab';
 
function wfAddExampleTab( &$skin, &$actions ) {
	global $wgTitle, $wgRequest, $wgExampleAction;
	if( is_object( $wgTitle ) ) {
		$selected = $wgRequest->getText( 'action' ) == $wgExampleAction ? 'selected' : false;
		$url      = $wgTitle->getLocalURL( "action=$wgExampleAction" );
		$actions[$wgExampleAction] = array(
			'text'  => $wgExampleAction, # should use wfMsg( $wgExampleAction )
			'class' => $selected,
			'href'  => $url
		);
	}
	return true;
}
 


The Vector skin (optional in 1.16, the default from 1.17 onwards) uses a new hook, SkinTemplateNavigation. You can cover both Vector and earlier skins such as Monobook or Modern by including code for both hooks. The example below uses messages in the MediaWiki namespace to name the relevant elements of the array. You can put this code in LocalSettings.php after the inclusion of extensions.

 
$wgHooks['SkinTemplateTabs'][] = 'onSkinTemplateTabs';
$wgHooks['SkinTemplateNavigation'][] = 'onSkinTemplateNavigation';
 
function onSkinTemplateTabs( $skin, &$actions ) {
	$actions[wfMsg( 'edit-addtab-id' )] = array(
		'class' => wfMsg( 'edit-addtab-class' ),
		'text' => wfMsg( 'edit-addtab-text' ),
		'href' => wfMsg( 'edit-addtab-href' )
	);		
	return true;
}
 
function onSkinTemplateNavigation( &$skin, &$contentActions ) {
	$contentActions['views'][wfMsg( 'edit-addtab-id' )] = array(
		'class' => wfMsg( 'edit-addtab-class' ),
		'text' => wfMsg( 'edit-addtab-text' ),
		'href' => wfMsg( 'edit-addtab-href' )
	);
	return true;
}
 

Wikitext in Sidebar

By default the MediaWiki:Sidebar article content is not normal wikitext. This snippet fixes that, and also ensures that the content can fall back to an i18n message if the article is not present (the normal behaviour for articles in the MediaWiki namespace). Replace a section such as the toolbox or navlinks in the skins/MonoBook.php file with the following:

 
global $wgUser, $wgTitle, $wgParser;
$title = 'sidebar'; # the lowercase first letter is important here since it's also a msg key
$article = new Article( Title::newFromText( $title, NS_MEDIAWIKI ) );
$text = $article->fetchContent();
if ( empty( $text ) ) $text = wfMsg( $title );
if ( is_object( $wgParser ) ) { $psr = $wgParser; $opt = $wgParser->mOptions; }
else { $psr = new Parser; $opt = NULL; }
if ( !is_object( $opt ) ) $opt = ParserOptions::newFromUser( $wgUser );
echo $psr->parse( $text, $wgTitle, $opt, true, true )->getText();
 


The procedure is the same for Vector-based skins; the difference is that you place the code inside the divs with ids portal and body, as per the other examples in skins/Vector.php. It should replace the following code and/or the toolbox:

 
<?php $this->renderPortals( $this->data['sidebar'] ); ?>
 

Development server identification

The idea is to have a picture or text that lets a developer see at a glance that they are on the development instance rather than the production instance of MediaWiki. By conditionally adding to the content of $wgSiteNotice, every article served from the development domain will display the marker.

 
if( $wgServer == "http://localhost" )  # or any other domain
  $wgSiteNotice = '<span style="position:absolute;top:0px;left:-140px;z-index:10">[[Image:Gnome-devel.png|100px]]</span>';
 

This approach was based on W:User:East718/include, and the image source is Image:Gnome-devel.svg

Importing and Exporting articles

The following snippet imports articles from the XML export file named in $file into the main namespace. The $resultCount variable holds the number of articles imported on success.

 
$source = ImportStreamSource::newFromFile( $file );
$importer = new WikiImporter( $source );
$importer->setTargetNamespace( NS_MAIN );
$reporter = new ImportReporter( $importer, false, false, false );
$reporter->open();
$result = $importer->doImport();
$resultCount = $reporter->close();
if( WikiError::isError( $result ) ) die( $result->getMessage() );
elseif( WikiError::isError( $resultCount ) ) die( $resultCount->getMessage() );
 


This one exports the current revision of all articles whose names are contained in the $pages array to the file named in $file.

 
$dbr = wfGetDB( DB_SLAVE );
$exporter = new WikiExporter( $dbr, WikiExporter::CURRENT, WikiExporter::BUFFER );
$exporter->list_authors = false;
$exporter->sink = new DumpFileOutput( $file );
$exporter->openStream();
foreach( $pages as $page ) {
	$title = Title::newFromText( $page );
	$exporter->pageByTitle( $title );
}
$exporter->closeStream();
fclose( $exporter->sink->handle );
 

See also