Resent-Date: Mon, 1 Feb 1999 04:09:58 +0100 (MET)
From: micha@schmitzm.hip.berkeley.edu (Michael Schmitz)
Subject: Patch for 2.2.0pre7
To: linux-m68k@lists.linux-m68k.org
Date: Sun, 31 Jan 1999 18:48:32 -0800 (PST)
Cc: jes.sorensen@cern.ch, linux-mac68k@baltimore.wwaves.com
Reply-To: mschmitz@lbl.gov
Resent-From: linux-m68k@phil.uni-sb.de

Hi,

The following patch reverts the changes to the m68k mm code introduced in
2.2.0 back to their 2.1.131 state. With this patch applied on top of
2.2.0pre7 (with Andreas' head.S diff already applied), both the Mac SE/30
and the Falcon boot and run OK.

Due to the widespread changes in head.S connected with the mm
changes, I've also had to revert head.S to my 2.1.131 version.
Also note that ioremap() is missing after you apply this patch:
the Atari framebuffer code reverts to using kernel_map, the
Mac code never used ioremap, and I couldn't test anything else
(and consequently didn't touch anything else).
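
For reference, the difference between the two mapping styles looks
roughly like this. This is an illustrative sketch only, not part of
the patch below; it assumes the kernel_map() prototype and the
KERNELMAP_* flags from the 2.1.x m68k headers, and the function and
variable names (map_screen_21/22, phys_screen_base, screen_len) are
made up:

	/* 2.1.131 style: what the Atari framebuffer code falls back to
	 * once this patch is applied.  kernel_map() picks the virtual
	 * address itself and returns it. */
	static void *map_screen_21(unsigned long phys_screen_base,
				   unsigned long screen_len)
	{
		return (void *)kernel_map(phys_screen_base, screen_len,
					  KERNELMAP_NO_COPYBACK, NULL);
	}

	/* 2.2 style: no longer available once this patch is applied. */
	static void *map_screen_22(unsigned long phys_screen_base,
				   unsigned long screen_len)
	{
		return ioremap(phys_screen_base, screen_len);
	}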

This patch will probably break some Amiga code, but frankly,
I couldn't care less. Its main purpose is to show that the problems
I had on Mac and Atari are located in the mm code (with a big
whopping mmu_engage bug still lurking in head.S for 030 Macs).
Fixing a silly oversimplification in the Mac video mapping
code in head.S just wasn't enough; I had to swap back the known
working Mac head.S. The other Mac people can at least now go
play with 2.2 and try to find out exactly which part of my diff
actually did the trick.

As I understand it, the jury is still out on the ioremap issue 
anyway; I'll wait for the dust to settle on that part before 
trying any further 2.2 changes.

	Michael


--- linux-2.2.0pre7/arch/m68k/kernel/head.S.rz	Sun Jan 31 14:15:30 1999
+++ linux-2.2.0pre7/arch/m68k/kernel/head.S	Sun Jan 31 14:15:42 1999
@@ -8,7 +8,6 @@
 ** 68040 fixes by Michael Rausch
 ** 68060 fixes by Roman Hodek
 ** MMU cleanup by Randy Thelen
-** Final MMU cleanup by Roman Zippel
 **
 ** Atari support by Andreas Schwab, using ideas of Robert de Vries
 ** and Bjoern Brauel
@@ -29,7 +28,7 @@
 ** for more details.
 **
 */
-
+	
 /*
  * Linux startup code.
  *
@@ -48,13 +47,13 @@
  * .  Jump to kernel startup
  *
  * Much of the file restructuring was to accomplish:
- * 1) Remove register dependency through-out the file.
+ * 1) Reduce register dependency through-out the file.
  * 2) Increase use of subroutines to perform functions
  * 3) Increase readability of the code
  *
  * Of course, readability is a subjective issue, so it will never be
  * argued that that goal was accomplished.  It was merely a goal.
- * A key way to help make code more readable is to give good
+ * A key way to help make code more readable is to give good 
  * documentation.  So, the first thing you will find is exaustive
  * write-ups on the structure of the file, and the features of the
  * functional subroutines.
@@ -63,7 +62,7 @@
  * ------------------
  *	Without a doubt the single largest chunk of head.S is spent
  * mapping the kernel and I/O physical space into the logical range
- * for the kernel.
+ * for the kernel.  
  *	There are new subroutines and data structures to make MMU
  * support cleaner and easier to understand.
  * 	First, you will find a routine call "mmu_map" which maps
@@ -83,72 +82,72 @@
  * also only engaged in debug mode.  Currently, it's only supported
  * on the Macintosh class of machines.  However, it is hoped that
  * others will plug-in support for specific machines.
- *
+ * 
  * ######################################################################
- *
+ * 
  * mmu_map
  * -------
  *	mmu_map was written for two key reasons.  First, it was clear
  * that it was very difficult to read the previous code for mapping
  * regions of memory.  Second, the Macintosh required such extensive
- * memory allocations that it didn't make sense to propogate the
+ * memory allocations that it didn't make sense to propogate the 
  * existing code any further.
  *	mmu_map requires some parameters:
- *
+ * 
  *	mmu_map (logical, physical, length, cache_type)
- *
+ * 
  *	While this essentially describes the function in the abstract, you'll
  * find more indepth description of other parameters at the implementation site.
  * 
- * mmu_get_root_table_entry
- * ------------------------
- * mmu_get_ptr_table_entry
+ * mmu_get_page_table
+ * ------------------
+ * mmu_get_pointer_table
+ * ---------------------
+ *	These routines are used by mmu_map to get fresh tables.  They
+ * will allocate a new page of memory and consume page tables from that page
+ * until the page has been exausted.  Unfortunately, the kernel code uses
+ * a wacky and not very efficient mechanism for re-using pages of memory
+ * allocated for page tables.  Therefore, while this code does set the kpt
+ * global to a correct value upon initial usage, it doesn't help.
+ * 
+ * mmu_clear_root_table
+ * --------------------
+ * mmu_clear_pointer_table
  * -----------------------
- * mmu_get_page_table_entry
- * ------------------------
+ * mmu_clear_page_table
+ * --------------------
+ *	Given a pointer to a table, these routines will clear it.
+ * Sometimes writing well factored code can be a source of pride.
  * 
- *	These routines are used by other mmu routines to get a pointer into
- * a table, if necessary a new table is allocated. These routines are working
- * basically like pmd_alloc() and pte_alloc() in <asm/pgtable.h>. The root
- * table needs of course only to be allocated once in mmu_get_root_table_entry,
- * so that here also some mmu specific initialization is done. The second page
- * at the start of the kernel (the first page is unmapped later) is used for
- * the kernel_pg_dir. It must be at a position known at link time (as it's used
- * to initialize the init task struct) and since it needs special cache
- * settings, it's the easiest to use this page, the rest of the page is used
- * for further pointer tables.
- * mmu_get_page_table_entry allocates always a whole page for page tables, this
- * means 1024 pages and so 4MB of memory can be mapped. It doesn't make sense
- * to manage page tables in smaller pieces as nearly all mappings have that
- * size.
- *
  * ######################################################################
- *
- *
+ * 
+ * mmu_init
+ * --------
+ *	Here is where the MMU is initialized for the various platforms.
+ * First, the kernel is mapped for all platforms at the address computed
+ * as the current address (which is known to be physical) and mapped down
+ * to logical 0x01000.  Then there is logic on a per-machine basis.
+ * 
  * ######################################################################
- *
+ * 
  * mmu_engage
  * ----------
- *	Thanks to a small helping routine enabling the mmu got quiet simple
- * and there is only one way left. mmu_engage makes a complete a new mapping
- * that only includes the absolute necessary to be able to jump to the final
- * postion and to restore the original mapping.
- * As this code doesn't need a transparent translation register anymore this
- * means all registers are free to be used by machines that needs them for
- * other purposes.
- *
+ *	The MMU engagement code is quite extensive and there is ample
+ * description of the algorithm in all it's gory detail at the site of the
+ * evil deed.  However, allow me to state that magic takes place there.
+ * 
  * ######################################################################
- *
+ * 
  * mmu_print
  * ---------
  *	This algorithm will print out the page tables of the system as
  * appropriate for an 030 or an 040.  This is useful for debugging purposes
  * and as such is enclosed in #ifdef MMU_PRINT/#endif clauses.
- *
+ * 
  * ######################################################################
- *
- * console_init
- * ------------
+ * 
+ * Lconsole_init
+ * -------------
  *	The console is also able to be turned off.  The console in head.S
  * is specifically for debugging and can be very useful.  It is surrounded by
  * #ifdef CONSOLE/#endif clauses so it doesn't have to ship in known-good
@@ -160,46 +159,50 @@
  *	Also, the algorithm for plotting pixels is abstracted so that in
  * theory other platforms could add support for different kinds of frame
  * buffers.  This could be very useful.
- *
- * console_put_penguin
- * -------------------
+ * 
+ * Lconsole_put_penguin
+ * --------------------
  *	An important part of any Linux bring up is the penguin and there's
  * nothing like getting the Penguin on the screen!  This algorithm will work
  * on any machine for which there is a console_plot_pixel.
- *
+ * 
  * console_scroll
  * --------------
  *	My hope is that the scroll algorithm does the right thing on the
  * various platforms, but it wouldn't be hard to add the test conditions
  * and new code if it doesn't.
- *
+ * 
  * console_putc
  * -------------
- *
+ * 
  * ######################################################################
+ * 
+ *	The only register that is passed through out the system are:
+ * .  A7 -- Stack Pointer (duh)
+ * .  A6 -- Top of Kernel,  available pages are taken from here
+ * .  A5 -- Ptr to Root Table
+ * .  D5 -- Ptr to __start (physical)
+ *	Many other registers are used as passed parameters into
+ * functions or used within functions.
  *
- *	Register usage has greatly simplified within head.S. Every subroutine
- * saves and restores all registers that it modifies (except it returns a
- * value in there of course). So the only register that needs to be initialized
- * is the stack pointer.
- * All other init code and data is now placed in the init section, so it will
- * be automatically freed at the end of the kernel initialization.
+ *	Reducing the register usage from a dozen to a few greatly simplified
+ * head.S.
  *
  * ######################################################################
- *
+ * 
  * options
  * -------
  *	There are many options availble in a build of this file.  I've
  * taken the time to describe them here to save you the time of searching
  * for them and trying to understand what they mean.
- *
+ * 
  * CONFIG_xxx:	These are the obvious machine configuration defines created
  * during configuration.  These are defined in include/linux/autoconf.h.
  *
  * CONSOLE:	There is support for head.S console in this file.  This
  * console can talk to a Mac frame buffer, but could easily be extrapolated
  * to extend it to support other platforms.
- *
+ * 
  * TEST_MMU:	This is a test harness for running on any given machine but
  * getting an MMU dump for another class of machine.  The classes of machines
  * that can be tested are any of the makes (Atari, Amiga, Mac, VME, etc.)
@@ -212,25 +215,28 @@
  *		can be dropped.  Do note that that will clean up the
  *		head.S code significantly as large blocks of #if/#else
  *		clauses can be removed.
- *
+ * 
  * MMU_NOCACHE_KERNEL:	On the Macintosh platform there was an inquiry into
  * determing why devices don't appear to work.  A test case was to remove
  * the cacheability of the kernel bits.
- *
+ * 
  * MMU_PRINT:	There is a routine built into head.S that can display the
- * MMU data structures.  It outputs its result through the serial_putc
+ * MMU data structures.  It outputs its result through the Lserial_putc
  * interface.  So where ever that winds up driving data, that's where the
  * mmu struct will appear.  On the Macintosh that's typically the console.
- *
+ * 
  * SERIAL_DEBUG:	There are a series of putc() macro statements
  * scattered through out the code to give progress of status to the
  * person sitting at the console.  This constant determines whether those
  * are used.
- *
+ * 
  * DEBUG:	This is the standard DEBUG flag that can be set for building
  *		the kernel.  It has the effect adding additional tests into
  *		the code.
- *
+ * 
+ * MMU_PRINT_PAGE_USAGE:
+ *		Print the number of pages used by the MMU tables.
+ * 
  * FONT_6x11:
  * FONT_8x8:
  * FONT_8x16:
@@ -240,13 +246,13 @@
  *		flexible!)  A pointer to the font's struct fbcon_font_desc
  *		is kept locally in Lconsole_font.  It is used to determine
  *		font size information dynamically.
- *
+ * 
  * Atari constants:
  * USE_PRINTER:	Use the printer port for serial debug.
  * USE_SCC_B:	Use the SCC port A (Serial2) for serial debug.
  * USE_SCC_A:	Use the SCC port B (Modem2) for serial debug.
- * USE_MFP:	Use the ST-MFP port (Modem1) for serial debug.
- *
+ * USE_MFP:	Use the ST-MFP port (Modem1) for serial debug. 
+ * 
  * Macintosh constants:
  * MAC_SERIAL_DEBUG:	Turns on serial debug output for the Macintosh.
  * MAC_USE_SCC_A:	Use the SCC port A (modem) for serial debug.
@@ -255,33 +261,31 @@
 
 #include <linux/config.h>
 #include <linux/linkage.h>
-#include <linux/init.h>
 #include <asm/bootinfo.h>
 #include <asm/setup.h>
 #include <asm/pgtable.h>
-#include "m68k_defs.h"
-
-#ifdef CONFIG_MAC
+#if defined(CONFIG_MAC)
+#include <video/font.h>		/* offsets for struct fbcon_font_desc */
+#endif
 
-#include <asm/machw.h>
+#if defined(CONFIG_MAC)
 
 /*
  * Macintosh console support
  */
-
 #define CONSOLE
 
 /*
- * Macintosh serial debug support; outputs boot info to the printer
+ * Macintosh serial debug support; outputs boot info to the printer 
  *   and/or modem serial ports
  */
 #undef MAC_SERIAL_DEBUG
 
 /*
- * Macintosh serial debug port selection; define one or both;
+ * Macintosh serial debug port selection; define one or both; 
  *   requires MAC_SERIAL_DEBUG to be defined
  */
-#define MAC_USE_SCC_A		/* Macintosh modem serial port */
+#undef  MAC_USE_SCC_A		/* Macintosh modem serial port */
 #define MAC_USE_SCC_B		/* Macintosh printer serial port */
 
 #endif	/* CONFIG_MAC */
@@ -290,24 +294,61 @@
 #undef MMU_NOCACHE_KERNEL
 #define SERIAL_DEBUG
 #undef DEBUG
+#undef MMU_PRINT_PAGE_USAGE
 
 /*
  * For the head.S console, there are three supported fonts, 6x11, 8x16 and 8x8.
  * The 8x8 font is harder to read but fits more on the screen.
  */
 #define FONT_8x8 	/* default */
-/* #define FONT_8x16 */	/* 2nd choice */
-/* #define FONT_6x11 */	/* 3rd choice */
-
+/* #define FONT_8x16	/* 2nd choice */
+/* #define FONT_6x11	/* 3rd choice */
+	
 .globl SYMBOL_NAME(kernel_pg_dir)
+.globl SYMBOL_NAME(kpt)
 .globl SYMBOL_NAME(availmem)
 .globl SYMBOL_NAME(m68k_pgtable_cachemode)
+.globl SYMBOL_NAME(kernel_pmd_table)
+.globl SYMBOL_NAME(swapper_pg_dir)
+
+#if defined(CONFIG_ATARI)
+.globl SYMBOL_NAME(atari_mch_type)
+#endif
+
+#if defined(CONFIG_MAC)
+.globl SYMBOL_NAME(mac_booter_data)
+.globl SYMBOL_NAME(compat_bi)
+.globl SYMBOL_NAME(mac_videobase)
+.globl SYMBOL_NAME(mac_videodepth)
+.globl SYMBOL_NAME(mac_rowbytes)
+#ifdef MAC_SERIAL_DEBUG
+.globl SYMBOL_NAME(mac_sccbase)
+#endif	/* MAC_SERIAL_DEBUG */
+#endif
+
+#if defined(CONFIG_MVME16x)
+.globl SYMBOL_NAME(mvme_bdid_ptr)
+#endif
+
+/*
+ * Added m68k_supervisor_cachemode for 68060 boards where some drivers
+ * need writethrough caching for supervisor accesses.  Drivers known to
+ * be effected are 53c7xx.c and apricot.c (when used on VME boards).
+ * Richard Hirst.
+ */
+
+#ifdef CONFIG_060_WRITETHROUGH
 .globl SYMBOL_NAME(m68k_supervisor_cachemode)
+#endif
+
+D6B_0460 = 16		/* indicates 680[46]0 in d6 */
+D6B_060  = 17		/* indicates 68060 in d6 */
+D6F_040  = 1<<D6B_0460
+D6F_060  = (1<<D6B_0460)+(1<<D6B_060)
 
 CPUTYPE_040	= 1	/* indicates an 040 */
 CPUTYPE_060	= 2	/* indicates an 060 */
 CPUTYPE_0460	= 3	/* if either above are set, this is set */
-CPUTYPE_020	= 4	/* indicates an 020 */
 
 /* Translation control register */
 TC_ENABLE = 0x8000
@@ -364,182 +405,111 @@
 PTR_INDEX_SHIFT  = 18
 PAGE_INDEX_SHIFT = 12
 
-#ifdef DEBUG
-/* When debugging use readable names for labels */
-#ifdef __STDC__
-#define L(name) .head.S.##name
-#else
-#define L(name) .head.S./**/name
-#endif
-#else
-#ifdef __STDC__
-#define L(name) .L##name
-#else
-#define L(name) .L/**/name
-#endif
-#endif
-
-/* Several macros to make the writing of subroutines easier:
- * - func_start marks the beginning of the routine which setups the frame
- *   register and saves the registers, it also defines another macro
- *   to automatically restore the registers again.
- * - func_return marks the end of the routine and simply calls the prepared
- *   macro to restore registers and jump back to the caller.
- * - func_define generates another macro to automatically put arguments
- *   onto the stack call the subroutine and cleanup the stack again.
- */
-
-/* Within subroutines these macros can be used to access the arguments
- * on the stack. With STACK some allocated memory on the stack can be
- * accessed and ARG0 points to the return address (used by mmu_engage).
- */
-#define	STACK	%a6@(stackstart)
-#define ARG0	%a6@(4)
-#define ARG1	%a6@(8)
-#define ARG2	%a6@(12)
-#define ARG3	%a6@(16)
-#define ARG4	%a6@(20)
-
-.macro	func_start	name,saveregs,stack=0
-L(\name):
-	linkw	%a6,#-\stack
-	moveml	\saveregs,%sp@-
-.set	stackstart,-\stack	
-
-.macro	func_return_\name
-	moveml	%sp@+,\saveregs
-	unlk	%a6
-	rts
-.endm
-.endm
-
-.macro	func_return	name
-	func_return_\name
-.endm
-
-.macro	func_call	name
-	jbsr	L(\name)
-.endm
-
-.macro	move_stack	nr,arg1,arg2,arg3,arg4
-.if	\nr
-	move_stack	"(\nr-1)",\arg2,\arg3,\arg4
-	movel	\arg1,%sp@-
-.endif
-.endm
-
-.macro	func_define	name,nr=0
-.macro	\name	arg1,arg2,arg3,arg4
-	move_stack	\nr,\arg1,\arg2,\arg3,\arg4
-	func_call	\name
-.if	\nr
-	lea	%sp@(\nr*4),%sp
-.endif
-.endm
-.endm
-
-func_define	mmu_map,4
-func_define	mmu_map_tt,4
-func_define	mmu_fixup_page_mmu_cache,1
-func_define	mmu_temp_map,2
-func_define	mmu_engage
-func_define	mmu_get_root_table_entry,1
-func_define	mmu_get_ptr_table_entry,2
-func_define	mmu_get_page_table_entry,2
-func_define	mmu_print
-func_define	get_new_page
-#ifdef CONFIG_HP300
-func_define	set_leds
-#endif
-
-.macro	mmu_map_eq	arg1,arg2,arg3
-	mmu_map	\arg1,\arg1,\arg2,\arg3
-.endm
-
-.macro	get_bi_record	record
-	pea	\record
-	func_call	get_bi_record
-	addql	#4,%sp
-.endm
-
-func_define	serial_putc,1
-func_define	console_putc,1
-
-.macro	putc	ch
-#if defined(CONSOLE) || defined(SERIAL_DEBUG)
-	pea	\ch
-#endif
-#ifdef CONSOLE
-	func_call	console_putc
-#endif
-#ifdef SERIAL_DEBUG
-	func_call	serial_putc
-#endif
-#if defined(CONSOLE) || defined(SERIAL_DEBUG)
-	addql	#4,%sp
-#endif
-.endm
-
-.macro	dputc	ch
-#ifdef DEBUG
-	putc	\ch
-#endif
-.endm
-
-func_define	putn,1
-
-.macro	dputn	nr
-#ifdef DEBUG
-	putn	\nr
-#endif
-.endm
-
-.macro	puts		string
-#if defined(CONSOLE) || defined(SERIAL_DEBUG)
-	__INITDATA
-.Lstr\@:
-	.string	"\string"
-	__FINIT
-	pea	%pc@(.Lstr\@)
-	func_call	puts
-	addql	#4,%sp
-#endif
-.endm
-
-.macro	dputs	string
-#ifdef DEBUG
-	puts	"\string"
-#endif
-.endm
-
+TABLENR_4MB	= 16	/* # of page tables needed to page 4 MB */
+TABLENR_16MB	= 64	/* same for 16 MB */
 
+#if !defined(CONSOLE) && !defined(DEBUG) && defined(SERIAL_DEBUG)
+#  define putc_trace(c)			  \
+		movel	%d7,%sp@-	; \
+		moveb	&c,%d7		; \
+		jbsr	Lserial_putc	; \
+		movel	%sp@+,%d7	;
+#  define putc(c)
+#  define puts(x)
+#  define putr()			  \
+		movel	%d7,%sp@-	; \
+		moveb	#13,%d7		; \
+		jbsr	Lserial_putc	; \
+		moveb	#10,%d7		; \
+		jbsr	Lserial_putc	; \
+		movel	%sp@+,%d7	;
+#  define putn(nr)
+#elif defined(DEBUG) || defined(CONSOLE)
+#  define putc_trace(c)			  \
+		movel	%d7,%sp@-	; \
+		moveb	&c,%d7		; \
+		jbsr	Lserial_putc	; \
+		movel	%sp@+,%d7	;
+#  define putc(c)	putc_trace(c)
+#  define puts(x)			  \
+		movel	%a0,%sp@-	; \
+		lea	%pc@(897f),%a0	; \
+		jbsr	Lserial_puts	; \
+		movel	%sp@+,%a0	; \
+		jbra	898f		; \
+	897:	.string	x		; \
+		.byte	0		; \
+		.even			; \
+	898:
+#  define putr()			  \
+		movel	%d7,%sp@-	; \
+		moveb	#13,%d7		; \
+		jbsr	Lserial_putc	; \
+		moveb	#10,%d7		; \
+		jbsr	Lserial_putc	; \
+		movel	%sp@+,%d7	;
+#  define putn(nr)			  \
+		movel	%d7,%sp@-	; \
+		movel	nr,%d7		; \
+		jbsr	Lserial_putnum	; \
+		movel	%sp@+,%d7	;
+#else /* ! DEBUG && ! SERIAL_DEBUG */
+#  define putc_trace(c)
+#  define putc(c)
+#  define puts(x)
+#  define putr()
+#  define putn(nr)	
+#endif
+
+/*
+ * mmu_map() register usage
+ *
+ * Here a symbolic names for the mmu_map() parameters.
+ */
+#define MAP_PHYS	%a1
+#define MAP_LOG		%a0
+#define MAP_CACHE	%d1
+#define MAP_LENGTH	%d0
+
+#define MMU_MAP(log,phys,leng,cache)	movel	log,MAP_LOG; \
+					movel	phys,MAP_PHYS; \
+					movel	leng,MAP_LENGTH; \
+					movel	cache,MAP_CACHE; \
+					jbsr	mmu_map
+	
+#define MMU_MAP_EQ(addr,leng,cache)	movel	addr,MAP_LOG; \
+					movel	MAP_LOG,MAP_PHYS; \
+					movel	leng,MAP_LENGTH; \
+					movel	cache,MAP_CACHE; \
+					jbsr	mmu_map
+	
+#define MMU_MAP_TT(addr,leng,cache)	movel	addr,MAP_LOG; \
+					movel	MAP_LOG,MAP_PHYS; \
+					movel	leng,MAP_LENGTH; \
+					movel	cache,MAP_CACHE; \
+					jbsr	mmu_map_tt
+	
 #define is_not_amiga(lab) cmpl &MACH_AMIGA,%pc@(m68k_machtype); jne lab
 #define is_not_atari(lab) cmpl &MACH_ATARI,%pc@(m68k_machtype); jne lab
 #define is_not_mac(lab) cmpl &MACH_MAC,%pc@(m68k_machtype); jne lab
 #define is_not_mvme16x(lab) cmpl &MACH_MVME16x,%pc@(m68k_machtype); jne lab
 #define is_not_bvme6000(lab) cmpl &MACH_BVME6000,%pc@(m68k_machtype); jne lab
 #define is_not_hp300(lab) cmpl &MACH_HP300,%pc@(m68k_machtype); jne lab
-
-#define is_040_or_060(lab)	btst &CPUTYPE_0460,%pc@(L(cputype)+3); jne lab
-#define is_not_040_or_060(lab)	btst &CPUTYPE_0460,%pc@(L(cputype)+3); jeq lab
-#define is_040(lab)		btst &CPUTYPE_040,%pc@(L(cputype)+3); jne lab
-#define is_060(lab)		btst &CPUTYPE_060,%pc@(L(cputype)+3); jne lab
-#define is_not_060(lab)		btst &CPUTYPE_060,%pc@(L(cputype)+3); jeq lab
-#define is_020(lab)		btst &CPUTYPE_020,%pc@(L(cputype)+3); jne lab
-#define is_not_020(lab)		btst &CPUTYPE_020,%pc@(L(cputype)+3); jeq lab
+	
+#define is_040_or_060(lab)	btst &CPUTYPE_0460,%pc@(Lcputype+3); jne lab
+#define is_not_040_or_060(lab)	btst &CPUTYPE_0460,%pc@(Lcputype+3); jeq lab
+#define is_040(lab)		btst &CPUTYPE_040,%pc@(Lcputype+3); jne lab
+#define is_060(lab)		btst &CPUTYPE_060,%pc@(Lcputype+3); jne lab
+#define is_not_060(lab)		btst &CPUTYPE_060,%pc@(Lcputype+3); jeq lab
 
 /* On the HP300 we use the on-board LEDs for debug output before
    the console is running.  Writing a 1 bit turns the corresponding LED
    _off_ - on the 340 bit 7 is towards the back panel of the machine.  */
-.macro	leds	mask
 #ifdef CONFIG_HP300
-	is_not_hp300(.Lled\@)
-	pea	\mask
-	func_call	set_leds
-	addql	#4,%sp
-.Lled\@:
+#define leds(x) is_not_hp300(42f) ; moveb #(x),%d7 ; jbsr Lset_leds; 42:
+#else
+#define leds(x)
 #endif
-.endm
 
 .text
 ENTRY(_stext)
@@ -556,84 +526,99 @@
 	.long	MACH_BVME6000, BVME6000_BOOTI_VERSION
 	.long	MACH_MAC, MAC_BOOTI_VERSION
 	.long	0
-1:	jra	SYMBOL_NAME(__start)
+1:	jra	SYMBOL_NAME(_start)
 
-.equ	SYMBOL_NAME(kernel_pg_dir),SYMBOL_NAME(_stext)
+.equ	SYMBOL_NAME(kernel_pmd_table),SYMBOL_NAME(_stext)
+.equ	SYMBOL_NAME(kernel_pg_dir),SYMBOL_NAME(kernel_pmd_table)
+.equ	SYMBOL_NAME(swapper_pg_dir),SYMBOL_NAME(kernel_pg_dir)+(ROOT_TABLE_SIZE<<2)
+.equ	Lavail_pmd_table,SYMBOL_NAME(swapper_pg_dir)+(ROOT_TABLE_SIZE<<2)
 
 .equ	.,SYMBOL_NAME(_stext)+PAGESIZE
 
 ENTRY(_start)
-	jra	SYMBOL_NAME(__start)
-__INIT
-ENTRY(__start)
 
 /*
  * Setup initial stack pointer
  */
 	lea	%pc@(SYMBOL_NAME(_stext)),%sp
-
+	
 /*
  * Record the CPU and machine type.
  */
 
-	get_bi_record	BI_MACHTYPE
-	lea	%pc@(SYMBOL_NAME(m68k_machtype)),%a1
-	movel	%a0@,%a1@
-
-	get_bi_record	BI_FPUTYPE
-	lea	%pc@(SYMBOL_NAME(m68k_fputype)),%a1
-	movel	%a0@,%a1@
-
-	get_bi_record	BI_MMUTYPE
-	lea	%pc@(SYMBOL_NAME(m68k_mmutype)),%a1
-	movel	%a0@,%a1@
-
-	get_bi_record	BI_CPUTYPE
-	lea	%pc@(SYMBOL_NAME(m68k_cputype)),%a1
-	movel	%a0@,%a1@
+	movew	#BI_MACHTYPE,%d0
+	jbsr	Lget_bi_record
+	movel	%a0@,%d0
+	lea	%pc@(SYMBOL_NAME(m68k_machtype)),%a0
+	movel	%d0,%a0@
+	movew	#BI_FPUTYPE,%d0
+	jbsr	Lget_bi_record
+	movel	%a0@,%d0
+	lea	%pc@(SYMBOL_NAME(m68k_fputype)),%a0
+	movel	%d0,%a0@
+	movew	#BI_MMUTYPE,%d0
+	jbsr	Lget_bi_record
+	movel	%a0@,%d0
+	lea	%pc@(SYMBOL_NAME(m68k_mmutype)),%a0
+	movel	%d0,%a0@
+	movew	#BI_CPUTYPE,%d0
+	jbsr	Lget_bi_record
+	movel	%a0@,%d0
+	lea	%pc@(SYMBOL_NAME(m68k_cputype)),%a0
+	movel	%d0,%a0@
 
-#ifdef CONFIG_MAC
+#if defined(CONFIG_MAC)
 /*
- * For Macintosh, we need to determine the display parameters early (at least
+ * For Macintosh, we need to determine the display parameters early (at least 
  * while debugging it).
  */
 
-	is_not_mac(L(test_notmac))
+	is_not_mac(Ltest_notmac)
 
-	get_bi_record	BI_MAC_VADDR
-	lea	%pc@(L(mac_videobase)),%a1
-	movel	%a0@,%a1@
+	movew	#BI_MAC_VADDR,%d0
+	jbsr	Lget_bi_record
+	movel	%a0@,%d0
+	lea	%pc@(SYMBOL_NAME(mac_videobase)),%a0
+	movel	%d0,%a0@
 
-	get_bi_record	BI_MAC_VDEPTH
-	lea	%pc@(L(mac_videodepth)),%a1
-	movel	%a0@,%a1@
+	movew	#BI_MAC_VDEPTH,%d0
+	jbsr	Lget_bi_record
+	movel	%a0@,%d0
+	lea	%pc@(SYMBOL_NAME(mac_videodepth)),%a0
+	movel	%d0,%a0@
 
-	get_bi_record	BI_MAC_VDIM
-	lea	%pc@(L(mac_dimensions)),%a1
-	movel	%a0@,%a1@
+	movew	#BI_MAC_VDIM,%d0
+	jbsr	Lget_bi_record
+	movel	%a0@,%d0
+	lea	%pc@(SYMBOL_NAME(mac_dimensions)),%a0
+	movel	%d0,%a0@
 
-	get_bi_record	BI_MAC_VROW
-	lea	%pc@(L(mac_rowbytes)),%a1
-	movel	%a0@,%a1@
+	movew	#BI_MAC_VROW,%d0
+	jbsr	Lget_bi_record
+	movel	%a0@,%d0
+	lea	%pc@(SYMBOL_NAME(mac_rowbytes)),%a0
+	movel	%d0,%a0@
 
 #ifdef MAC_SERIAL_DEBUG
-	get_bi_record	BI_MAC_SCCBASE
-	lea	%pc@(L(mac_sccbase)),%a1
-	movel	%a0@,%a1@
+	movew	#BI_MAC_SCCBASE,%d0
+	jbsr	Lget_bi_record
+	movel	%a0@,%d0
+	lea	%pc@(SYMBOL_NAME(mac_sccbase)),%a0
+	movel	%d0,%a0@
 #endif /* MAC_SERIAL_DEBUG */
 
 #if 0
 	/*
 	 * Clear the screen
 	 */
-	lea	%pc@(L(mac_videobase)),%a0
+	lea	%pc@(SYMBOL_NAME(mac_videobase)),%a0
 	movel	%a0@,%a1
-	lea	%pc@(L(mac_dimensions)),%a0
+	lea	%pc@(SYMBOL_NAME(mac_dimensions)),%a0
 	movel	%a0@,%d1
 	swap	%d1		/* #rows is high bytes */
 	andl	#0xFFFF,%d1	/* rows */
 	subl	#10,%d1
-	lea	%pc@(L(mac_rowbytes)),%a0
+	lea	%pc@(SYMBOL_NAME(mac_rowbytes)),%a0
 loopy2:
 	movel	%a0@,%d0
 	subql	#1,%d0
@@ -642,8 +627,13 @@
 	dbra	%d0,loopx2
 	dbra	%d1,loopy2
 #endif
+	/*
+	 * clobbered %d0,so restore it
+	 */
+	lea	%pc@(SYMBOL_NAME(m68k_cputype)),%a0
+	movel	%a0@,%d0
 
-L(test_notmac):
+Ltest_notmac:
 #endif /* CONFIG_MAC */
 
 
@@ -653,44 +643,35 @@
  * and is converted here from a booter type definition to a separate bit
  * number which allows for the standard is_0x0 macro tests.
  */
-	movel	%pc@(SYMBOL_NAME(m68k_cputype)),%d0
 	/*
-	 * Assume it's an 030
+	 * Test the BootInfo cputype for 060
 	 */
-	clrl	%d1
-
 	/*
-	 * Test the BootInfo cputype for 060
+	 * Assume it's an 020/030
 	 */
+	clrl	%d6
+	
 	btst	#CPUB_68060,%d0
 	jeq	1f
-	bset	#CPUTYPE_060,%d1
-	bset	#CPUTYPE_0460,%d1
-	jra	3f
-1:
-	/*
+	bset	#CPUTYPE_060,%d6
+	bset	#CPUTYPE_0460,%d6
+	jra	2f
+	
+1:	/*
 	 * Test the BootInfo cputype for 040
 	 */
 	btst	#CPUB_68040,%d0
 	jeq	2f
-	bset	#CPUTYPE_040,%d1
-	bset	#CPUTYPE_0460,%d1
-	jra	3f
+	bset	#CPUTYPE_040,%d6
+	bset	#CPUTYPE_0460,%d6
+	
 2:
 	/*
-	 * Test the BootInfo cputype for 020
-	 */
-	btst	#CPUB_68020,%d0
-	jeq	3f
-	bset	#CPUTYPE_020,%d1
-	jra	3f
-3:
-	/*
 	 * Record the cpu type
 	 */
-	lea	%pc@(L(cputype)),%a0
-	movel	%d1,%a0@
-
+	lea	%pc@(Lcputype),%a0
+	movel	%d6,%a0@
+	
 	/*
 	 * NOTE:
 	 *
@@ -701,46 +682,50 @@
 	 *	is_060
 	 *	is_not_060
 	 */
-
+	
 	/*
 	 * Determine the cache mode for pages holding MMU tables
-	 * and for supervisor mode, unused for '020 and '030
 	 */
-	clrl	%d0
-	clrl	%d1
-
-	is_not_040_or_060(L(save_cachetype))
-
+	is_not_040_or_060(Lcachetype020)
+	
 	/*
 	 * '040 or '060
-	 * d1 := cacheable write-through
+	 * d6 := cacheable write-through 
 	 * NOTE: The 68040 manual strongly recommends non-cached for MMU tables,
 	 * but we have been using write-through since at least 2.0.29 so I
 	 * guess it is OK.
 	 */
-#ifdef CONFIG_060_WRITETHROUGH
+
+#if defined(CONFIG_060_WRITETHROUGH)
 	/*
 	 * If this is a 68060 board using drivers with cache coherency
 	 * problems, then supervisor memory accesses need to be write-through
 	 * also; otherwise, we want copyback.
 	 */
 
-	is_not_060(1f)
-	movel	#_PAGE_CACHE040W,%d0
-	jra	L(save_cachetype)
+	movel	#_PAGE_CACHE040W,%d6
+	is_060(Lset_sup)
+	move.w	#_PAGE_CACHE040,%d6
+Lset_sup:
+  	lea	%pc@(SYMBOL_NAME(m68k_supervisor_cachemode)),%a0
+	movel	%d6,%a0@
 #endif /* CONFIG_060_WRITETHROUGH */
-1:
-	movew	#_PAGE_CACHE040,%d0
 
-	movel	#_PAGE_CACHE040W,%d1
+	movel	#_PAGE_CACHE040W,%d6
+	jbra	Lsave_cachetype
 
-L(save_cachetype):
-	/* Save cache mode for supervisor mode and page tables
+Lcachetype020:
+	/*
+	 * '020 or '030
+	 * d6 := cache bits unused (after mapping the kernel!)
+	 */
+	moveql	#0,%d6
+Lsave_cachetype:
+	/*
+	 * Save cache mode for page tables
 	 */
-	lea	%pc@(SYMBOL_NAME(m68k_supervisor_cachemode)),%a0
-	movel	%d0,%a0@
 	lea	%pc@(SYMBOL_NAME(m68k_pgtable_cachemode)),%a0
-	movel	%d1,%a0@
+	movel	%d6,%a0@
 
 /*
  * raise interrupt level
@@ -767,122 +752,183 @@
  */
 
 #ifdef CONFIG_ATARI
-	is_not_atari(L(notypetest))
+	is_not_atari(Lnotypetest)
 
 	/* get special machine type (Medusa/Hades/AB40) */
 	moveq	#0,%d3 /* default if tag doesn't exist */
-	get_bi_record	BI_ATARI_MCH_TYPE
+	movew	#BI_ATARI_MCH_TYPE,%d0
+	jbsr	Lget_bi_record
 	tstl	%d0
 	jbmi	1f
 	movel	%a0@,%d3
 	lea	%pc@(SYMBOL_NAME(atari_mch_type)),%a0
 	movel	%d3,%a0@
-1:
+1:	
 	/* On the Hades, the iobase must be set up before opening the
 	 * serial port. There are no I/O regs at 0x00ffxxxx at all. */
 	moveq	#0,%d0
 	cmpl	#ATARI_MACH_HADES,%d3
 	jbne	1f
 	movel	#0xff000000,%d0		/* Hades I/O base addr: 0xff000000 */
-1:	lea     %pc@(L(iobase)),%a0
+1:	lea     %pc@(Liobase),%a0
 	movel   %d0,%a0@
-
-L(notypetest):
+	
+Lnotypetest:
 #endif
 
 /*
  * Initialize serial port
  */
-	jbsr	L(serial_init)
+	jbsr	Lserial_init
 
 /*
  * Initialize console
  */
 #ifdef CONFIG_MAC
-	is_not_mac(L(nocon))
+	is_not_mac(Lnocon)
 #ifdef CONSOLE
-	jbsr	L(console_init)
+	jbsr	Lconsole_init
 #ifdef CONSOLE_PENGUIN
-	jbsr	L(console_put_penguin)
+	jbsr	Lconsole_put_penguin
 #endif	/* CONSOLE_PENGUIN */
-	jbsr	L(console_put_stats)
+	jbsr	Lconsole_put_stats
 #endif	/* CONSOLE */
-L(nocon):
+Lnocon:
 #endif	/* CONFIG_MAC */
 
+	putr()
+	putc_trace('A')
+
+/*
+ * Get address at end of bootinfo and
+ * round up to a page boundary.
+ * Note: This hack uses two 'features' of the bi_record:
+ *  (1)  When the item searched for isn't found, a0 points
+ *       to the end of the structure;
+ *  (2)  #0 is an invalid (and never present) bi_record element.
+ */
+	moveq	#0,%d0
+	jbsr	Lget_bi_record
+	addw	#PAGESIZE-1,%a0
+	movel	%a0,%d0
+	andl	#-PAGESIZE,%d0
+	movel	%d0,%a6
+
+/*
+ * %a6 now contains the address to the 
+ * next free block beyond the kernel
+ */
 
-	putc	'\n'
-	putc	'A'
-	dputn	%pc@(L(cputype))
-	dputn	%pc@(SYMBOL_NAME(m68k_supervisor_cachemode))
-	dputn	%pc@(SYMBOL_NAME(m68k_pgtable_cachemode))
-	dputc	'\n'
+	putc_trace('B')
 
 /*
  * Save physical start address of kernel
  */
-	lea	%pc@(L(phys_kernel_start)),%a0
-	lea	%pc@(SYMBOL_NAME(_stext)),%a1
-	subl	#SYMBOL_NAME(_stext),%a1
-	movel	%a1,%a0@
+	lea	%pc@(SYMBOL_NAME(_stext)-PAGESIZE:w),%a0
+	movel	%a0,%d5
 
-	putc	'B'
+	putc_trace('C')
 
-	leds	0x4
+	leds(0x4)
 
 /*
  *	mmu_init
- *
+ *	
  *	This block of code does what's necessary to map in the various kinds
- *	of machines for execution of Linux.
- *	First map the first 4 MB of kernel code & data
+ *	of machines for execution of Linux.  First, it's clear there
+ *	has to be a root table, so that is cleared.  Then, the kernel
+ *	has to be mapped, so the kernel is mapped low.  Then, it's on
+ *	to machine specific code where specific address ranges are
+ *	mapped depending on current I/O configurations.
+ *	
+ *	Begin:
+ *		%a6 is an input to this routine.  %a6 must point to
+ *			the first available byte of memory on a virgin page.
+ *		%d5 is the start of the kernel's physical address
+ *	
+ *	End:
+ *		%a5 will point to the root table.
+ *		%a6 will (likely) have been modified during this
+ *			call.
+ *	
+ *	
+ *	
  */
+mmu_init:
 
-	mmu_map	#0,%pc@(L(phys_kernel_start)),#4*1024*1024,\
-		%pc@(SYMBOL_NAME(m68k_supervisor_cachemode))
+	putc_trace('D')
+	
+/*
+ * initialize the kernel root table.
+ */
+	lea	%pc@(SYMBOL_NAME(kernel_pg_dir)),%a5
+	jbsr	mmu_clear_root_table
+	
+	putc_trace('E')
+	
+	lea	%pc@(SYMBOL_NAME(Lavail_pmd_table)),%a4
+	jbsr	mmu_clear_pointer_table
+
+	putc_trace('F')
+	
+/*
+ * map the first 4 MB of kernel code & data
+ */
+
+#if defined(CONFIG_060_WRITETHROUGH)
+	movel	%pc@(m68k_supervisor_cachemode),%d2
+	addil	#_PAGE_GLOBAL040+_PAGE_ACCESSED,%d2
+	MMU_MAP(#0,%d5,#4*1024*1024,%d2)
+#else
+	MMU_MAP(#0,%d5,#4*1024*1024,#_PAGE_GLOBAL040+_PAGE_CACHE040+_PAGE_ACCESSED)
+#endif
 
-	putc	'C'
+	/* Clear the pointer table to use, and page table to use */
+	moveq	#0,%d0
+	movel	%d0,%a4
+	movel	%d0,%a3
 
-#ifdef CONFIG_AMIGA
+	putc_trace('G')
 
-L(mmu_init_amiga):
+#if defined(CONFIG_AMIGA)
 
-	is_not_amiga(L(mmu_init_not_amiga))
-/*
+mmu_init_amiga:
+	
+	is_not_amiga(mmu_init_not_amiga)
+/* 
  * mmu_init_amiga
  */
 
-	putc	'D'
+	putc_trace('H')
 
 	is_not_040_or_060(1f)
 
 	/*
 	 * 040: Map the 16Meg range physical 0x0 upto logical 0x8000.0000
 	 */
-	mmu_map	#0x80000000,#0,#0x01000000,#_PAGE_NOCACHE_S
-
-	jbra	L(mmu_init_done)
+	MMU_MAP(#0x80000000,#0,#0x01000000,#_PAGE_GLOBAL040+_PAGE_NOCACHE_S+_PAGE_ACCESSED)
+	
+	jbra	mmu_init_done
 
-1:
+1:	
 	/*
 	 * 030:	Map the 32Meg range physical 0x0 upto logical 0x8000.0000
 	 */
-	mmu_map	#0x80000000,#0,#0x02000000,#_PAGE_NOCACHE030
-	mmu_map_tt	1,#0xf8000000,#0x08000000,#_PAGE_NOCACHE_S
-
-	jbra	L(mmu_init_done)
-
-L(mmu_init_not_amiga):
+	MMU_MAP(#0x80000000,#0,#0x02000000,#_PAGE_NOCACHE030+_PAGE_ACCESSED)
+	
+	jbra	mmu_init_done
+	
+mmu_init_not_amiga:
 #endif
 
-#ifdef CONFIG_ATARI
-
-L(mmu_init_atari):
-
-	is_not_atari(L(mmu_init_not_atari))
+#if defined(CONFIG_ATARI)
 
-	putc	'E'
+mmu_init_atari:
 
+	is_not_atari(mmu_init_not_atari)
+	
+	putc_trace('I')
+	
 /* On the Atari, we map the I/O region (phys. 0x00ffxxxx) by mapping
    the last 16 MB of virtual address space to the first 16 MB (i.e.
    0xffxxxxxx -> 0x00xxxxxx). For this, an additional pointer table is
@@ -906,49 +952,53 @@
 2:	movel	#0xff000000,%d0 /* Medusa/Hades base addr: 0xff000000 */
 1:	movel	%d0,%d3
 
-	is_040_or_060(L(spata68040))
+	is_040_or_060(Lspata68040)
 
 	/* Map everything non-cacheable, though not all parts really
-	 * need to disable caches (crucial only for 0xff8000..0xffffff
+	 * need to disable caches (crucial only for 0xffc000..0xffffff
 	 * (standard I/O) and 0xf00000..0xf3ffff (IDE)). The remainder
 	 * isn't really used, except for sometimes peeking into the
 	 * ROMs (mirror at phys. 0x0), so caching isn't necessary for
 	 * this. */
-	mmu_map	#0xff000000,%d3,#0x01000000,#_PAGE_NOCACHE030
-
-	jbra	L(mmu_init_done)
-
-L(spata68040):
+	MMU_MAP(#0xff000000,%d3,#0x01000000,#_PAGE_NOCACHE030+_PAGE_ACCESSED)
 
-	mmu_map	#0xff000000,%d3,#0x01000000,#_PAGE_NOCACHE_S
+	jbra	mmu_init_done
+	
+Lspata68040:
+	
+	MMU_MAP(#0xff000000,%d3,#0x01000000,#_PAGE_GLOBAL040+_PAGE_NOCACHE_S+_PAGE_ACCESSED)
+	
+	jbra	mmu_init_done
 
-	jbra	L(mmu_init_done)
-
-L(mmu_init_not_atari):
+mmu_init_not_atari:
 #endif
 
 #ifdef CONFIG_HP300
-	is_not_hp300(L(nothp300))
+	is_not_hp300(Lnothp300)
 
 /* On the HP300, we map the ROM, INTIO and DIO regions (phys. 0x00xxxxxx)
-   by mapping 32MB from 0xf0xxxxxx -> 0x00xxxxxx) using an 030 early
-   termination page descriptor.  The ROM mapping is needed because the LEDs
+   by mapping 32MB from 0xf0xxxxxx -> 0x00xxxxxx) using an 030 early 
+   termination page descriptor.  The ROM mapping is needed because the LEDs 
    are mapped there too.  */
 
-	mmu_map	#0xf0000000,#0,#0x02000000,#_PAGE_NOCACHE030
-
-L(nothp300):
+	MMU_MAP(#0xf0000000,#0,#0x01000000,#_PAGE_NOCACHE030+_PAGE_ACCESSED)
 
+#if 0
+	movel	#_PAGE_NOCACHE030+_PAGE_PRESENT+_PAGE_ACCESSED,%d0
+	movel	%d0,%a5@(0x78<<2)
 #endif
 
-#ifdef CONFIG_MVME16x
+Lnothp300:
+
+#endif
 
-	is_not_mvme16x(L(not16x))
+#if defined(CONFIG_MVME16x)
+	
+	is_not_mvme16x(Lnot16x)
 
 	/* Get pointer to board ID data */
 	movel	%d2,%sp@-
-	trap	#15
-	.word	0x70		/* trap 0x70 - .BRD_ID */
+	.long	0x4e4f0070		/* trap 0x70 - .BRD_ID */
 	movel	%sp@+,%d2
 	lea	%pc@(SYMBOL_NAME(mvme_bdid_ptr)),%a0
 	movel	%d2,%a0@
@@ -966,16 +1016,16 @@
 	 * 0xffe00000->0xffe1ffff.
 	 */
 
-	mmu_map_tt	1,#0xe0000000,#0x20000000,#_PAGE_NOCACHE_S
-
-	jbra	L(mmu_init_done)
-
-L(not16x):
+	MMU_MAP_TT(#0xe0000000,#0x20000000,#_PAGE_NOCACHE_S+_PAGE_ACCESSED)
+	
+	jbra	mmu_init_done
+	
+Lnot16x:
 #endif	/* CONFIG_MVME162 | CONFIG_MVME167 */
 
-#ifdef CONFIG_BVME6000
-
-	is_not_bvme6000(L(not6000))
+#if defined(CONFIG_BVME6000)
+	
+	is_not_bvme6000(Lnot6000)
 
 	/*
 	 * On BVME6000 we have already created kernel page tables for
@@ -986,23 +1036,23 @@
 	 * clash with User code virtual address space.
 	 */
 
-	mmu_map_tt	1,#0xe0000000,#0x20000000,#_PAGE_NOCACHE_S
-
-	jbra	L(mmu_init_done)
-
-L(not6000):
+	MMU_MAP_TT(#0xe0000000,#0x20000000,#_PAGE_NOCACHE_S+_PAGE_ACCESSED)
+	
+	jbra	mmu_init_done
+	
+Lnot6000:
 #endif /* CONFIG_BVME6000 */
 
-/*
+/* 
  * mmu_init_mac
- *
+ * 
  * The Macintosh mappings are less clear.
- *
+ * 
  * Even as of this writing, it is unclear how the
  * Macintosh mappings will be done.  However, as
  * the first author of this code I'm proposing the
  * following model:
- *
+ * 
  * Map the kernel (that's already done),
  * Map the I/O (on most machines that's the
  * 0x5000.0000 ... 0x5200.0000 range,
@@ -1014,114 +1064,198 @@
  *
  * By the way, if the frame buffer is at 0x0000.0000
  * then the Macintosh is known as an RBV based Mac.
- *
+ * 
  * By the way 2, the code currently maps in a bunch of
  * regions.  But I'd like to cut that out.  (And move most
  * of the mappings up into the kernel proper ... or only
  * map what's necessary.)
  */
 
-#ifdef CONFIG_MAC
-
-L(mmu_init_mac):
-
-	is_not_mac(L(mmu_init_not_mac))
-
-	putc	'F'
+#if defined(CONFIG_MAC)
 
-	lea	%pc@(L(mac_videobase)),%a0
-	lea	%pc@(L(console_video_virtual)),%a1
+mmu_init_mac:
+	
+	is_not_mac(mmu_init_not_mac)
+
+	putc_trace('J')
+	
+	lea	%pc@(SYMBOL_NAME(mac_videobase)),%a0
+	lea	%pc@(Lconsole_video_virtual),%a1
 	movel	%a0@,%a1@
 
 	is_not_040_or_060(1f)
-
-	moveq	#_PAGE_NOCACHE_S,%d3
+	
+	movel	#_PAGE_GLOBAL040+_PAGE_NOCACHE_S+_PAGE_ACCESSED,MAP_CACHE
 	jbra	2f
-1:
-	moveq	#_PAGE_NOCACHE030,%d3
-2:
+1:		
+	movel	#_PAGE_NOCACHE030+_PAGE_ACCESSED,MAP_CACHE
+2:	
 	/*
 	 * Mac Note: screen address of logical 0xF000.0000 -> <screen physical>
-	 *	     we simply map the 4MB that contains the videomem
 	 */
 
-	movel	#VIDEOMEMMASK,%d0
-	andl	L(mac_videobase),%d0
-
-	mmu_map		#VIDEOMEMBASE,%d0,#VIDEOMEMSIZE,%d3
-	mmu_map_eq	#0x40800000,#0x02000000,%d3	/* rom ? */
-	mmu_map_eq	#0x50000000,#0x02000000,%d3
-	mmu_map_eq	#0x60000000,#0x00400000,%d3
-	mmu_map_eq	#0x9c000000,#0x00400000,%d3
-	mmu_map_tt	1,#0xf8000000,#0x08000000,%d3
-
-	jbra	L(mmu_init_done)
-
-L(mmu_init_not_mac):
-#endif
-
-L(mmu_init_done):
+	/* Calculate screen size */
+	clrl	%d0
+	lea	%pc@(SYMBOL_NAME(mac_dimensions)),%a0
+	movew	%a0@,%d0 		/* d0 = screen height in pixels */
 
-	putc	'G'
-	leds	0x8
+	lea	%pc@(SYMBOL_NAME(mac_rowbytes)),%a0
+	mulul	%a0@,%d0		/* scan line bytes x num scan lines */
+	lea	%pc@(SYMBOL_NAME(mac_videobase)),%a0
+	movel	%a0@,%d2		/* grab screen offset from start of a page */
+	andl	#PAGESIZE-1,%d2		/* ... offset from start of page ... */
+	addl	%d2,%d0			/* add it to N bytes needed for screen for mapping purposes! */
+	addl	#PAGESIZE-1,%d0		/* Round up to page alignment */
+	andl	#-PAGESIZE,%d0		/* d0 is now the number of 4K pages for the screen */
+
+	movel	%a0@,%d2
+	andl	#PAGESIZE-1,%d2
+	addl	#0xF0000000,%d2
+	lea	%pc@(Lconsole_video_virtual),%a1
+	movel	%d2,%a1@		/* Update the console_video address */
+	movel	%a0@,%d2
+	andl	#-PAGESIZE,%d2
+
+	MMU_MAP(#0xF0000000,%d2,MAP_LENGTH,MAP_CACHE)
+	MMU_MAP_EQ(#0x40800000,#0x02000000,MAP_CACHE)	/* ROM ? */
+	MMU_MAP_EQ(#0x50000000,#0x02000000,MAP_CACHE)
+	MMU_MAP_EQ(#0x60000000,#0x00400000,MAP_CACHE)
+	MMU_MAP_EQ(#0x9C000000,#0x00400000,MAP_CACHE)
+	MMU_MAP_TT(#0xF8000000,#0x08000000,MAP_CACHE)
+	
+	jbra	mmu_init_done
+	
+mmu_init_not_mac:
+#endif
+
+mmu_init_done:		
+	
+	putc_trace('K')
+	leds(0x8)
 
 /*
  * mmu_fixup
- *
+ * 
  * On the 040 class machines, all pages that are used for the
  * mmu have to be fixed up. According to Motorola, pages holding mmu
  * tables should be non-cacheable on a '040 and write-through on a
  * '060. But analysis of the reasons for this, and practical
  * experience, showed that write-through also works on a '040.
  *
- * Allocated memory so far goes from kernel_end to memory_start that
- * is used for all kind of tables, for that the cache attributes
- * are now fixed.
+ * So, we'll walk through the MMU table to determine which pages were
+ * allocated.  An alternative would be to "know" what pages were
+ * allocated above.  But that's fraught with maintenance problems.
+ * It's easier to walk the table.
+ * 
  */
-L(mmu_fixup):
+#if defined(CONFIG_M68040) || defined(CONFIG_M68060)
 
-	is_not_040_or_060(L(mmu_fixup_done))
+mmu_fixup:
+	
+	is_not_040_or_060(mmu_fixup_done)
+	
+#if defined(MMU_NOCACHE_KERNEL)
+	jbra	mmu_fixup_done
+#endif
+	
+	moveml	%d0-%d5/%a0,%sp@-
+	
+	movel	%a5,%a0
+	jbsr	mmu_fixup_page_mmu_cache
+	
+	movel	#ROOT_TABLE_SIZE-1,%d5
+1:	
+	movel	%a5@(%d5*4),%d2
+	movel	%d2,%d0
+	andb	#_PAGE_TABLE,%d0
+	jbeq	4f
+
+	movel	%d2,%a0
+	jbsr	mmu_fixup_page_mmu_cache
+	
+	andl	#_TABLE_MASK,%d2
+	movel	%d2,%a4
+	movel	#PTR_TABLE_SIZE-1,%d4
 
-#ifdef MMU_NOCACHE_KERNEL
-	jbra	L(mmu_fixup_done)
-#endif
+2:
+	movel	%a4@(%d4*4),%d2
+	movel	%d2,%d0
+	andb	#_PAGE_TABLE,%d0
+	jbeq	3f
+
+	movel	%d2,%a0
+	jbsr	mmu_fixup_page_mmu_cache
+	
+3:
+	dbra	%d4,2b
 
-	/* first fix the page at the start of the kernel, that
-         * contains also kernel_pg_dir.
-	 */
-	movel	%pc@(L(phys_kernel_start)),%d0
-	lea	%pc@(SYMBOL_NAME(_stext)),%a0
-	subl	%d0,%a0
-	mmu_fixup_page_mmu_cache	%a0
+4:
+	dbra	%d5,1b
 
-	movel	%pc@(L(kernel_end)),%a0
-	subl	%d0,%a0
-	movel	%pc@(L(memory_start)),%a1
-	subl	%d0,%a1
-	bra	2f
-1:
-	mmu_fixup_page_mmu_cache	%a0
-	addw	#PAGESIZE,%a0
-2:
-	cmpl	%a0,%a1
-	jgt	1b
+	moveml	%sp@+,%d0-%d5/%a0
 
-L(mmu_fixup_done):
+	jbra	mmu_fixup_done
 
-#ifdef MMU_PRINT
-	mmu_print
-#endif
+mmu_fixup_page_mmu_cache:
+	moveml	%a0/%d0-%d5,%sp@-
 
-/*
- * mmu_engage
- *
- * This chunk of code performs the gruesome task of engaging the MMU.
- * The reason its gruesome is because when the MMU becomes engaged it
- * maps logical addresses to physical addresses.  The Program Counter
- * register is then passed through the MMU before the next instruction
- * is fetched (the instruction following the engage MMU instruction).
- * This may mean one of two things:
- * 1. The Program Counter falls within the logical address space of
+	/* Calculate the offset in the root table 
+	 */
+	movel	%a0,%d5
+	andil	#0xfe000000,%d5
+	roll	#7,%d5
+
+	/* Calculate the offset in the pointer table 
+	 */
+	movel	%a0,%d4
+	andil	#0x01fc0000,%d4
+	lsrl	#2,%d4
+	swap	%d4
+	
+	/* Calculate the offset in the page table
+	 */
+	movel	%a0,%d3
+	andil	#0x0003f000,%d3
+	lsll	#4,%d3
+	swap	%d3
+
+	/*
+	 * Find the page table entry (PTE) for the page
+	 */
+	movel	%a5@(%d5*4),%d0
+	andil	#_TABLE_MASK,%d0
+	movel	%d0,%a0
+	movel	%a0@(%d4*4),%d0
+	andil	#0xffffff00,%d0
+	movel	%d0,%a0
+	movel	%a0@(%d3*4),%d0
+	/*
+	 * Set cache mode to cacheable write through
+	 */
+	andil	#_CACHEMASK040,%d0
+	orl	%pc@(SYMBOL_NAME(m68k_pgtable_cachemode)),%d0
+	movel	%d0,%a0@(%d3*4)
+	
+	moveml	%sp@+,%a0/%d0-%d5
+	rts
+	
+mmu_fixup_done:
+#endif /* CONFIG_M68040 && CONFIG_M68060 */
+	
+#if defined(MMU_PRINT)
+	jbsr	mmu_print
+#endif
+
+/* 
+ * mmu_engage
+ * 
+ * This chunk of code performs the gruesome task of engaging the MMU.
+ * The reason its gruesome is because when the MMU becomes engaged it
+ * maps logical addresses to physical addresses.  The Program Counter
+ * register is then passed through the MMU before the next instruction
+ * is fetched (the instruction following the engage MMU instruction).
+ * This may mean one of two things:
+ * 1. The Program Counter falls within the logical address space of
  *    the kernel of which there are two sub-possibilities:
  *    A. The PC maps to the correct instruction (logical PC == physical
  *       code location), or
@@ -1138,14 +1272,14 @@
  * A. The kernel is located at physical memory addressed the same as
  *    the logical memory for the kernel, i.e., 0x01000.
  * B. The kernel is located some where else.  e.g., 0x0400.0000
- *
+ * 
  *    Under some conditions the Macintosh can look like A or B.
  * [A friend and I once noted that Apple hardware engineers should be
  * wacked twice each day: once when they show up at work (as in, Whack!,
  * "This is for the screwy hardware we know you're going to design today."),
  * and also at the end of the day (as in, Whack! "I don't know what
  * you designed today, but I'm sure it wasn't good."). -- rst]
- *
+ * 
  * This code works on the following premise:
  * If the kernel start (%d5) is within the first 16 Meg of RAM,
  * then create a mapping for the kernel at logical 0x8000.0000 to
@@ -1170,85 +1304,291 @@
  * do nothing).
  *
  * Let's do it.
- *
- *
+ * 
+ * 
  */
 
-	putc	'H'
-
-	mmu_engage
-
-#ifdef CONFIG_AMIGA
+	putc_trace('L')
+	
+#if defined(CONFIG_MAC)
+	is_not_mac(1f)
+	lea	%pc@(Lconsole_video_virtual),%a1
+	lea	%pc@(SYMBOL_NAME(mac_videobase)),%a3
+	movel	%a1@,%a3@
+1:
+#endif
+	
+#if defined(CONFIG_AMIGA)
 	is_not_amiga(1f)
 	/* fixup the Amiga custom register location before printing */
-	clrl	L(custom)
-1:
+	lea	%pc@(Lcustom),%a0
+	movel	#0x80000000,%a0@
+1:	
 #endif
 
-#ifdef CONFIG_ATARI
+#if defined(CONFIG_ATARI)
 	is_not_atari(1f)
 	/* fixup the Atari iobase register location before printing */
-	movel	#0xff000000,L(iobase)
+	lea	%pc@(Liobase),%a0
+	movel	#0xff000000,%a0@
+1:	
+#endif
+	
+#if defined(CONFIG_HP300)
+	is_not_hp300(1f)
+	/*
+	 * Fix up the custom register to point to the new location of the LEDs.
+	 */
+	lea	%pc@(Lcustom),%a1
+	movel	#0xf0000000,%a1@
+1:	
+#endif
+	
+	/*
+	 * Test for the simplest case:
+	 *    _start == 0x01000 
+	 *       %d5 == 0x00000
+	 */
+	lea	mmu_engage_done:w,%a0
+	tstl	%d5
+	jbeq	mmu_engage_core
+
+	/*
+	 * Prepare a transparent translation register
+	 * for the region the kernel is in.
+	 */
+	movel	%d5,%d0
+	andl	#0xff000000,%d0
+
+#if defined(CONFIG_M68040) || defined(CONFIG_M68060)
+	is_not_040_or_060(1f)
+	lea	mmu_engage_040_disable_itt0,%a0
+	
+	orw	#TTR_ENABLE+TTR_KERNELMODE+_PAGE_NOCACHE_S,%d0
+	.chip	68040
+	movec	%d0,%itt0
+	.chip	68k
+	jbra	2f
 1:
 #endif
+#if defined(CONFIG_M68020) || defined(CONFIG_M68030)
+	lea	mmu_engage_030_disable_tt0,%a0
+	
+	orw	#TTR_ENABLE+TTR_CI+TTR_RWM+TTR_FCB2+TTR_FCM1+TTR_FCM0,%d0
+	lea	%pc@(Lmmu),%a3
+	movel	%d0,%a3@
+	.chip	68030
+	pmove	%a3@,%tt0
+	.chip	68k
+#endif
 
-#ifdef CONFIG_MAC
-	is_not_mac(1f)
-	movel	#~VIDEOMEMMASK,%d0
-	andl	L(mac_videobase),%d0
-	addl	#VIDEOMEMBASE,%d0
-	movel	%d0,L(mac_videobase)
+	/*
+	 * Test for being able to use just a
+	 * transparent translation register:
+	 *    _start >= 0x0100.1000
+	 *       %d5 >= 0x0100.0000
+	 */
+2:
+	movel	%d5,%d0
+	andil	#0xff000000,%d0
+	jbne	mmu_engage_core
+
+	/*
+	 * I hate this case: 
+	 *   %d5 > 0, and
+	 *   %d5 < 0x0100.0000
+	 * Here's where we have to create a temporary mapping
+	 * at 0x8000.0000 and do all the right stuff magic.
+	 * This bites.  -- rst
+	 */
+
+#define TMP_MAP		0x80000000
+#define TMP_MAP_OFFS	(TMP_MAP>>(ROOT_INDEX_SHIFT-2))
+
+	/*
+	 * Build a really small Ptr table at %d5
+	 */
+	movel	%d5,%a4
+	jbsr	mmu_clear_pointer_table
+	
+	movel	%d5,%d0
+	addl	#0x100,%d0
+	orw	#_PAGE_TABLE+_PAGE_ACCESSED,%d0
+	movel	%d0,%a4@
+	
+	/*
+	 * Build a really small Page table at %d5 + 0x100
+	 * (Maps the first 16K of the kernel @ 0x8000.0000)
+	 */
+	movel	%d5,%a3
+	addaw	#0x100,%a3
+	jbsr	mmu_clear_page_table
+	
+	movel	#PAGESIZE,%d1
+	movel	%d5,%d0
+	orw	#_PAGE_PRESENT+_PAGE_ACCESSED,%d0
+	movel	%d0,%a3@+
+	addl	%d1,%d0
+	movel	%d0,%a3@+
+	addl	%d1,%d0
+	movel	%d0,%a3@+
+	addl	%d1,%d0
+	movel	%d0,%a3@
+
+	/*
+	 * Alter the Root table to use our really small entries
+	 */
+	lea	%a5@(TMP_MAP_OFFS),%a0
+	movel	%a0@,%d2		/* save entry */
+	movel	%d5,%d0
+	orw	#_PAGE_TABLE+_PAGE_ACCESSED,%d0
+	movel	%d0,%a0@		/* insert temp. entry */
+
+#if defined(CONFIG_M68040) || defined(CONFIG_M68060)
+	is_not_040_or_060(1f)
+	lea	mmu_engage_040_disable_8000,%a0
+	addal	#TMP_MAP,%a0
+	jbra	mmu_engage_core
 1:
 #endif
+#if defined(CONFIG_M68020) || defined(CONFIG_M68030)
+	lea	mmu_engage_030_disable_8000,%a0
+	addal	#TMP_MAP,%a0
+#endif
+	
+mmu_engage_core:
+	
+#if defined(CONFIG_M68040) || defined(CONFIG_M68060)
+	is_not_040_or_060(2f)
 
-#ifdef CONFIG_HP300
-	is_not_hp300(1f)
+mmu_engage_040:
+	.chip	68040
+	nop
+	cinva	%bc
+	nop
+	pflusha
+	nop
+	movec	%a5,%srp
+	movec	%a5,%urp
+	movel	#TC_ENABLE+TC_PAGE4K,%d0
+	movec	%d0,%tc		/* enable the MMU */
+	lea	SYMBOL_NAME(kernel_pg_dir),%a5
+	jmp	%a0@		/* Go to clean up code */
+
+mmu_engage_040_disable_itt0:
+	moveq	#0,%d0
+	movec	%d0,%itt0
+	jmp	mmu_engage_done:w
+
+mmu_engage_040_disable_8000:
 	/*
-	 * Fix up the custom register to point to the new location of the LEDs.
+	 * This code is running at 0x8000.0000+ right now
 	 */
-	movel	#0xf0000000,L(custom)
+	moveq	#0,%d0
+	movec	%d0,%itt0
+	jmp	1f:w	/* Jump down into logical space into the kernel! */
+1:
+	/* Now we're back on the ground! */
+	movel	%d2,%a5@(TMP_MAP_OFFS) /* Restore the old 0x8000.0000 mapping */
+	nop
+	pflusha
+	nop
+	jmp	mmu_engage_done:w
+	.chip	68k
 
+2:
+#endif
+
+#if defined(CONFIG_M68020) || defined(CONFIG_M68030)
+mmu_engage_030:
+	.chip	68030
+	lea	%pc@(Lmmu),%a3
+	movel	#0x80000002,%a3@
+	movel	%a5,%a3@(4)
+	movel	#0x0808,%d1
+	movec	%d1,%cacr
+	pmove	%a3@,%srp
+	pmove	%a3@,%crp
+	pflusha
 	/*
-	 * Energise the FPU and caches.
+	 * enable,super root enable,4096 byte pages,7 bit root index,
+	 * 7 bit pointer index, 6 bit page table index.
 	 */
-	movel	#0x60,0xf05f400c
+	movel	#0x82c07760,%a3@
+	pmove	%a3@,%tc	/* enable the MMU */
+	lea	SYMBOL_NAME(kernel_pg_dir),%a5
+	jmp	%a0@		/* Go to the appropriate clean up code */
+
+mmu_engage_030_disable_tt0:
+	clrl	%a3@
+	pmove	%a3@,%tt0
+	jmp	mmu_engage_done:w
+	
+mmu_engage_030_disable_8000:
+	clrl	%a3@
+	pmove	%a3@,%tt0
+	jmp	1f:w	/* Jump down into logical space into the kernel! */
 1:
+	/* Now we're back on the ground! */
+	movel	%d2,%a5@(TMP_MAP_OFFS)	/* Restore the old 0x8000.0000 mapping */
+	pflusha
+	jmp	mmu_engage_done:w
+	.chip	68k
 #endif
+	
+mmu_engage_done:
 
+#if defined(CONFIG_HP300)
+	is_not_hp300(1f)
+	/*
+	 * Energise the FPU and caches.
+	 */
+	movel	#0x60, 0xf05f400c
+1:	
+#endif
+	movew	#PAGESIZE,%sp
+	
 /*
  * Fixup the addresses for the kernel pointer table and availmem.
  * Convert them from physical addresses to virtual addresses.
  */
 
-	putc	'I'
-	leds	0x10
+	putc_trace('M')
+	leds(0x10)
+
+	/* d5 contains physaddr of kernel start
+	 */
+	lea	SYMBOL_NAME(kpt),%a0
+	subl	%d5,%a0@
 
 	/* do the same conversion on the first available memory
 	 * address (in a6).
 	 */
-	movel	L(memory_start),%d0
-	movel	%d0,SYMBOL_NAME(availmem)
-
+	lea	SYMBOL_NAME(availmem),%a0
+	subl	%d5,%a6
+	movel	%a6,%a0@
+	
 /*
  * Enable caches
  */
 
-	is_not_040_or_060(L(cache_not_680460))
+#if defined(CONFIG_M68040) || defined(CONFIG_M68060)
+	is_not_040_or_060(Lcache_not_680460)
 
-L(cache680460):
+Lcache680460:
 	.chip	68040
 	nop
 	cpusha	%bc
 	nop
-
-	is_060(L(cache68060))
+	
+	is_060(Lcache68060)
 
 	movel	#CC6_ENABLE_D+CC6_ENABLE_I,%d0
 	/* MMU stuff works in copyback mode now, so enable the cache */
 	movec	%d0,%cacr
-	jra	L(cache_done)
+	jra	Lcache_done
 
-L(cache68060):
+Lcache68060:
 	movel	#CC6_ENABLE_D+CC6_ENABLE_I+CC6_ENABLE_SB+CC6_PUSH_DPI+CC6_ENABLE_B+CC6_CLRA_B,%d0
 	/* MMU stuff works in copyback mode now, so enable the cache */
 	movec	%d0,%cacr
@@ -1257,19 +1597,56 @@
 	.chip	68060
 	movec	%d0,%pcr
 
-	jbra	L(cache_done)
-L(cache_not_680460):
-L(cache68030):
+	jbra	Lcache_done
+Lcache_not_680460:
+#endif
+#if defined(CONFIG_M68020) || defined(CONFIG_M68030)
+Lcache68030:
 	.chip	68030
 	movel	#CC3_ENABLE_DB+CC3_CLR_D+CC3_ENABLE_D+CC3_ENABLE_IB+CC3_CLR_I+CC3_ENABLE_I,%d0
 	movec	%d0,%cacr
 
-	jra	L(cache_done)
+	jra	Lcache_done
+#endif
 	.chip	68k
-L(cache_done):
+Lcache_done:
+
+	putc_trace('P')
 
-	putc	'J'
+#if defined(MMU_PRINT_PAGE_USAGE)
+	/*
+	 * Print out the number of pages used by MMU above the kernel
+	 */
+	puts("MMU #")
+	lea	%pc@(SYMBOL_NAME(_end)),%a0
+	addw	#PAGESIZE-1,%a0
+	movel	%a0,%d0
+	andl	#-PAGESIZE,%d0
+	movel	%a6,%d1
+	subl	%d0,%d1		/* d1 :	= d1 - d0 */
+	putn(%d1)
+	putr()
+
+	puts("Page #")
+	putn(%pc@(Lmmu_num_page_tables))
+	putr()
+	
+
+	puts("Ptr #")
+	putn(%pc@(Lmmu_num_pointer_tables))
+	putr()
+
+	puts("Total #")
+	movel	%pc@(Lmmu_num_page_tables),%d0
+	addl	%pc@(Lmmu_num_pointer_tables),%d0
+	putn(%d0)
+	putr()
 
+	puts("Halting.")	
+1:
+	jbra	1b
+#endif /* MMU_PRINT_PAGE_USAGE */
+	
 /*
  * Setup initial stack pointer
  */
@@ -1277,8 +1654,31 @@
 	lea	0x2000(%a2),%sp
 
 /* jump to the kernel start */
-	putc	'\n'
-	leds	0x55
+	putr()
+	leds(0x55)
+
+#if defined(DEBUG)
+	puts("     kpt:")
+	lea	%pc@(SYMBOL_NAME(kpt)),%a0
+	movel	%a0,%d7		/* get start addr. */
+	jbsr	Lserial_putnum
+	putr()
+
+	puts("    *kpt:")
+	lea	%pc@(SYMBOL_NAME(kpt)),%a0
+	movel	%a0@,%d7	/* get start addr. */
+	jbsr	Lserial_putnum
+	putr()
+#endif
+
+#if 0
+	movel	#0xFFFF,%d1
+2:
+	movel	#0xFFFFFFFF,%d0
+1:
+	dbra	%d0,1b
+	dbra	%d1,2b
+#endif
 
 	subl	%a6,%a6		/* clear a6 for gdb */
 	jbsr	SYMBOL_NAME(start_kernel)
@@ -1289,24 +1689,21 @@
  * Returns: d0: size (-1 if not found)
  *          a0: data pointer (end-of-records if not found)
  */
-func_start	get_bi_record,%d1
-
-	movel	ARG1,%d0
+Lget_bi_record:
 	lea	%pc@(SYMBOL_NAME(_end)),%a0
-1:	tstw	%a0@(BIR_TAG)
+1:	tstw	%a0@(BIR_tag)
 	jeq	3f
-	cmpw	%a0@(BIR_TAG),%d0
+	cmpw	%a0@(BIR_tag),%d0
 	jeq	2f
-	addw	%a0@(BIR_SIZE),%a0
+	addw	%a0@(BIR_size),%a0
 	jra	1b
 2:	moveq	#0,%d0
-	movew	%a0@(BIR_SIZE),%d0
-	lea	%a0@(BIR_DATA),%a0
-	jra	4f
+	movew	%a0@(BIR_size),%d0
+	lea	%a0@(BIR_data),%a0
+	rts
 3:	moveq	#-1,%d0
-	lea	%a0@(BIR_SIZE),%a0
-4:
-func_return	get_bi_record
+	lea	%a0@(BIR_size),%a0
+	rts
 
 
 /*
@@ -1356,7 +1753,7 @@
  *		bits 25..18 - index into the Pointer Table
  *		bits 17..12 - index into the Page Table
  *		bits 11..0  - offset into a particular 4K page
- *
+ *	
  *	The algorithms which follows do one thing: they abstract
  *	the MMU hardware.  For example, there are three kinds of
  *	cache settings that are relevant.  Either, memory is
@@ -1366,17 +1763,17 @@
  *	in which case it has its own kind of cache bits.  There
  *	are constants which abstract these notions from the code that
  *	actually makes the call to map some range of memory.
- *
- *
- *
+ *	
+ *	
+ *	
  */
 
-#ifdef MMU_PRINT
+#if defined(MMU_PRINT)
 /*
  *	mmu_print
- *
+ *	
  *	This algorithm will print out the current MMU mappings.
- *
+ *	
  *	Input:
  *		%a5 points to the root table.  Everything else is calculated
  *			from this.
@@ -1392,21 +1789,30 @@
 #define MMU_PRINT_VALID			1
 #define MMU_PRINT_UNINITED		0
 
-#define putZc(z,n)		jbne 1f; putc z; jbra 2f; 1: putc n; 2:
+#define	putZc(z,n)		jbne 1f; putc(z); jbra 2f ; 1: putc(n); 2:
 
-func_start	mmu_print,%a0-%a6/%d0-%d7
+mmu_print:
+	moveml	%a0-%a6/%d0-%d7,%sp@-
 
-	movel	%pc@(L(kernel_pgdir_ptr)),%a5
-	lea	%pc@(L(mmu_print_data)),%a0
+	lea	%pc@(Lmmu_print_data),%a0
 	movel	#MMU_PRINT_UNINITED,%a0@(mmu_next_valid)
-
+	
 	is_not_040_or_060(mmu_030_print)
-
+	
 mmu_040_print:
-	puts	"\nMMU040\n"
-	puts	"rp:"
-	putn	%a5
-	putc	'\n'
+	putr()
+	puts("MMU040")
+	putr()
+	putr()
+	puts("rp:")
+	movel	%a5,%d7
+	jbsr	Lserial_putnum
+	putr()
+	puts("tc:")
+	movel	%d5,%d7
+	jbsr	Lserial_putnum
+	putr()
+	putr()
 #if 0
 	/*
 	 * The following #if/#endif block is a tight algorithm for dumping the 040
@@ -1414,11 +1820,8 @@
 	 * MMU Map algorithm appears to go awry and you need to debug it at the
 	 * entry per entry level.
 	 */
-	movel	#ROOT_TABLE_SIZE,%d5
-#if 0
-	movel	%a5@+,%d7		| Burn an entry to skip the kernel mappings,
-	subql	#1,%d5			| they (might) work
-#endif
+	movel	#ROOT_TABLE_SIZE-1,%d5
+	movel	%a5@+,%d7		/* Burn an entry to skip the kernel mappings, they work */
 1:	tstl	%d5
 	jbeq	mmu_print_done
 	subq	#1,%d5
@@ -1426,11 +1829,11 @@
 	btst	#1,%d7
 	jbeq	1b
 
-2:	putn	%d7
+2:	jbsr	Lserial_putnum
 	andil	#0xFFFFFE00,%d7
 	movel	%d7,%a4
 	movel	#PTR_TABLE_SIZE,%d4
-	putc	' '
+	putc(' ')
 3:	tstl	%d4
 	jbeq	11f
 	subq	#1,%d4
@@ -1438,7 +1841,7 @@
 	btst	#1,%d7
 	jbeq	3b
 
-4:	putn	%d7
+4:	jbsr	Lserial_putnum
 	andil	#0xFFFFFF00,%d7
 	movel	%d7,%a3
 	movel	#PAGE_TABLE_SIZE,%d3
@@ -1452,28 +1855,30 @@
 7:	tstl	%d2
 	jbeq	8f
 	subq	#1,%d2
-	putc	' '
+	putc(' ')
 	jbra	91f
-8:	putc	'\n'
+8:	putr()
 	movel	#8+1+8+1+1,%d2
-9:	putc	' '
+9:	putc(' ')
 	dbra	%d2,9b
 	movel	#7,%d2
-91:	putn	%d6
+91:	movel	%d6,%d7
+	jbsr	Lserial_putnum
 	jbra	6b
 
-31:	putc	'\n'
+31:	putr()
 	movel	#8+1,%d2
-32:	putc	' '
+32:	putc(' ')
 	dbra	%d2,32b
 	jbra	3b
 
-11:	putc	'\n'
+11:	putr()
 	jbra	1b
 #endif /* MMU 040 Dumping code that's gory and detailed */
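
The dump loops above lean on two properties of the 040 table descriptors that
are easy to miss in the assembly: bit 1 set means the entry is resident, and
the upper bits (0xFFFFFE00 at the root level, 0xFFFFFF00 at the pointer level)
are the physical address of the next-level table.  A sketch of that test in C
(types and names are mine):

    /* Descriptor test/mask as done by the 040 dump loops (my names). */
    typedef unsigned long mmu_desc_t;

    static mmu_desc_t *next_table(mmu_desc_t entry, unsigned long addr_mask)
    {
        if (!(entry & 0x2))                       /* bit 1 clear -> invalid entry */
            return (mmu_desc_t *)0;
        return (mmu_desc_t *)(entry & addr_mask); /* strip the status bits */
    }
    /* addr_mask is 0xFFFFFE00 for root entries, 0xFFFFFF00 for pointer entries */
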
-
-	lea	%pc@(SYMBOL_NAME(kernel_pg_dir)),%a5
-	movel	%a5,%a0			/* a0 has the address of the root table ptr */
+			
+	movel	%a5,%d0			/* a5 -> root table ptr */
+	andil	#0xfffffe00,%d0		/* I forget why this is here ? */
+	movel	%d0,%a0			/* a0 has the address of the root table ptr */
 	movel	#0x00000000,%a4		/* logical address */
 	moveql	#0,%d0
 40:
@@ -1485,7 +1890,7 @@
 	jbne	41f
 	jbsr	mmu_print_tuple_invalidate
 	jbra	48f
-41:
+41:	
 	movel	#0,%d1
 	andil	#0xfffffe00,%d6
 	movel	%d6,%a1
@@ -1497,11 +1902,11 @@
 	jbne	43f
 	jbsr	mmu_print_tuple_invalidate
 	jbra	47f
-43:
+43:		
 	movel	#0,%d2
 	andil	#0xffffff00,%d6
 	movel	%d6,%a2
-44:
+44:	
 	movel	%a4,%d5
 	addil	#PAGESIZE,%d5
 	movel	%a2@+,%d6
@@ -1528,42 +1933,26 @@
 	cmpib	#128,%d1
 	jbne	42b
 48:
-	movel	%d5,%a4			/* move to the next logical address */
+	movel	%d5,%a4			/* move to the next logical address */	
 	addq	#1,%d0
 	cmpib	#128,%d0
 	jbne	40b
 
-	.chip	68040
-	movec	%dtt1,%d0
-	movel	%d0,%d1
-	andiw	#0x8000,%d1		/* is it valid ? */
-	jbeq	1f			/* No, bail out */
-
-	movel	%d0,%d1
-	andil	#0xff000000,%d1		/* Get the address */
-	putn	%d1
-	puts	"=="
-	putn	%d1
-
-	movel	%d0,%d6
-	jbsr	mmu_040_print_flags_tt
-1:
-	movec	%dtt0,%d0
+	.long	0x4e7a0007		/* movec dtt1,%d0 */
 	movel	%d0,%d1
 	andiw	#0x8000,%d1		/* is it valid ? */
-	jbeq	1f			/* No, bail out */
+	jbeq	49f			/* No, bail out */
 
 	movel	%d0,%d1
 	andil	#0xff000000,%d1		/* Get the address */
-	putn	%d1
-	puts	"=="
-	putn	%d1
+	putn(%d1)
+	puts("==")
+	putn(%d1)
 
 	movel	%d0,%d6
 	jbsr	mmu_040_print_flags_tt
-1:
-	.chip	68k
-
+	
+49:
 	jbra	mmu_print_done
 
 mmu_040_print_flags:
@@ -1571,30 +1960,38 @@
 	putZc(' ','G')	/* global bit */
 	btstl	#7,%d6
 	putZc(' ','S')	/* supervisor bit */
-mmu_040_print_flags_tt:
+mmu_040_print_flags_tt:	
 	btstl	#6,%d6
 	jbne	3f
-	putc	'C'
+	putc('C')
 	btstl	#5,%d6
 	putZc('w','c')	/* write through or copy-back */
 	jbra	4f
 3:
-	putc	'N'
+	putc('N')
 	btstl	#5,%d6
 	putZc('s',' ')	/* serialized non-cacheable, or non-cacheable */
-4:
+4:		
 	rts
-
+	
 mmu_030_print_flags:
 	btstl	#6,%d6
 	putZc('C','I')	/* write through or copy-back */
 	rts
-
-mmu_030_print:
-	puts	"\nMMU030\n"
-	puts	"\nrp:"
-	putn	%a5
-	putc	'\n'
+	
+mmu_030_print:	
+	putr()
+	puts("rp:")
+	movel	%a5,%d7
+	jbsr	Lserial_putnum
+	putr()
+	puts("tc:")
+	movel	%d5,%d7
+	jbsr	Lserial_putnum
+	putr()
+	putr()
+	puts("MMU030")
+	putr()
 	movel	%a5,%d0
 	andil	#0xfffffff0,%d0
 	movel	%d0,%a0
@@ -1610,10 +2007,10 @@
 	jbeq	1f			/* no */
 	jbsr	mmu_030_print_helper
 	jbra	38f
-1:
+1:	
 	jbsr	mmu_print_tuple_invalidate
 	jbra	38f
-31:
+31:	
 	movel	#0,%d1
 	andil	#0xfffffff0,%d6
 	movel	%d6,%a1
@@ -1627,14 +2024,14 @@
 	jbeq	1f			/* no */
 	jbsr	mmu_030_print_helper
 	jbra	37f
-1:
+1:	
 	jbsr	mmu_print_tuple_invalidate
 	jbra	37f
-33:
+33:		
 	movel	#0,%d2
 	andil	#0xfffffff0,%d6
 	movel	%d6,%a2
-34:
+34:	
 	movel	%a4,%d5
 	addil	#PAGESIZE,%d5
 	movel	%a2@+,%d6
@@ -1655,16 +2052,17 @@
 	cmpib	#128,%d1
 	jbne	32b
 38:
-	movel	%d5,%a4			/* move to the next logical address */
+	movel	%d5,%a4			/* move to the next logical address */	
 	addq	#1,%d0
 	cmpib	#128,%d0
 	jbne	30b
 
 mmu_print_done:
-	puts	"\n\n"
-
-func_return	mmu_print
-
+	putr()
+	putr()
+	
+	moveml	%sp@+,%a0-%a6/%d0-%d7
+	rts
 
 mmu_030_print_helper:
 	moveml	%d0-%d1,%sp@-
@@ -1674,825 +2072,675 @@
 	jbsr	mmu_print_tuple
 	moveml	%sp@+,%d0-%d1
 	rts
-
+	
 mmu_print_tuple_invalidate:
 	moveml	%a0/%d7,%sp@-
 
-	lea	%pc@(L(mmu_print_data)),%a0
+	lea	%pc@(Lmmu_print_data),%a0
 	tstl	%a0@(mmu_next_valid)
 	jbmi	mmu_print_tuple_invalidate_exit
-
+	
 	movel	#MMU_PRINT_INVALID,%a0@(mmu_next_valid)
 
-	putn	%a4
-
-	puts	"##\n"
-
+	movel	%a4,%d7
+	jbsr	Lserial_putnum
+	
+	puts("##")
+	putr()
+	
 mmu_print_tuple_invalidate_exit:
 	moveml	%sp@+,%a0/%d7
 	rts
 
-
+		
 mmu_print_tuple:
 	moveml	%d0-%d7/%a0,%sp@-
 
-	lea	%pc@(L(mmu_print_data)),%a0
-
+	lea	%pc@(Lmmu_print_data),%a0
+	
 	tstl	%a0@(mmu_next_valid)
-	jble	mmu_print_tuple_print
+	jbmi	mmu_print_tuple_print
+	jbeq	mmu_print_tuple_print
+	jbpl	mmu_print_tuple_test
 
+mmu_print_tuple_test:
 	cmpl	%a0@(mmu_next_physical),%d1
 	jbeq	mmu_print_tuple_increment
-
+	
 mmu_print_tuple_print:
-	putn	%d0
-	puts	"->"
-	putn	%d1
+	movel	%d0,%d7
+	jbsr	Lserial_putnum
+	
+	puts("->")
+	
+	movel	%d1,%d7
+	jbsr	Lserial_putnum
 
 	movel	%d1,%d6
 	jbsr	%a6@
-
+	
 mmu_print_tuple_record:
 	movel	#MMU_PRINT_VALID,%a0@(mmu_next_valid)
-
+	
 	movel	%d1,%a0@(mmu_next_physical)
 
 mmu_print_tuple_increment:
 	movel	%d5,%d7
 	subl	%a4,%d7
 	addl	%d7,%a0@(mmu_next_physical)
-
-mmu_print_tuple_exit:
+	
+mmu_print_tuple_exit:	
 	moveml	%sp@+,%d0-%d7/%a0
 	rts
 
 mmu_print_machine_cpu_types:
-	puts	"machine: "
-
+	puts("machine: ")
+	
 	is_not_amiga(1f)
-	puts	"amiga"
+	puts("amiga")
 	jbra	9f
-1:
+1:	
 	is_not_atari(2f)
-	puts	"atari"
+	puts("atari")
 	jbra	9f
-2:
+2:	
 	is_not_mac(3f)
-	puts	"macintosh"
+	puts("macintosh")
 	jbra	9f
-3:	puts	"unknown"
-9:	putc	'\n'
+3:	puts("unknown")
+9:	putr()
 
-	puts	"cputype: 0"
+	puts("cputype: 0")		
 	is_not_060(1f)
-	putc	'6'
+	putc('6')
 	jbra	9f
-1:
+1:	
 	is_not_040_or_060(2f)
-	putc	'4'
+	putc('4')
 	jbra	9f
-2:	putc	'3'
-9:	putc	'0'
-	putc	'\n'
-
-	rts
-#endif /* MMU_PRINT */
-
-/*
- * mmu_map_tt
- *
- * This is a specific function which works on all 680x0 machines.
- * On 030, 040 & 060 it will attempt to use Transparent Translation
- * registers (tt1).
- * On 020 it will call the standard mmu_map which will use early
- * terminating descriptors.
- */
-func_start	mmu_map_tt,%d0/%d1/%a0,4
-
-	dputs	"mmu_map_tt:"
-	dputn	ARG1
-	dputn	ARG2
-	dputn	ARG3
-	dputn	ARG4
-	dputc	'\n'
-
-	is_020(L(do_map))
-
-	/* Extract the highest bit set
-	 */
-	bfffo	ARG3{#0,#32},%d1
-	cmpw	#8,%d0
-	jcc	L(do_map)
-
-	/* And get the mask
-	 */
-	moveq	#-1,%d0
-	lsrl	%d1,%d0
-	lsrl	#1,%d0
-
-	/* Mask the address
-	 */
-	movel	%d0,%d1
-	notl	%d1
-	andl	ARG2,%d1
-
-	/* Generate the upper 16bit of the tt register
-	 */
-	lsrl	#8,%d0
-	orl	%d0,%d1
-	clrw	%d1
-
-	is_040_or_060(L(mmu_map_tt_040))
-
-	/* set 030 specific bits (read/write access for supervisor mode
-	 * (highest function code set, lower two bits masked))
-	 */
-	orw	#TTR_ENABLE+TTR_RWM+TTR_FCB2+TTR_FCM1+TTR_FCM0,%d1
-	movel	ARG4,%d0
-	btst	#6,%d0
-	jeq	1f
-	orw	#TTR_CI,%d1
-
-1:	lea	STACK,%a0
-	dputn	%d1
-	movel	%d1,%a0@
-	.chip	68030
-	tstl	ARG1
-	jne	1f
-	pmove	%a0@,%tt0
-	jra	2f
-1:	pmove	%a0@,%tt1
-2:	.chip	68k
-	jra	L(mmu_map_tt_done)
-
-	/* set 040 specific bits
-	 */
-L(mmu_map_tt_040):
-	orw	#TTR_ENABLE+TTR_KERNELMODE,%d1
-	orl	ARG4,%d1
-	dputn	%d1
-
-	.chip	68040
-	tstl	ARG1
-	jne	1f
-	movec	%d1,%itt0
-	movec	%d1,%dtt0
-	jra	2f
-1:	movec	%d1,%itt1
-	movec	%d1,%dtt1
-2:	.chip	68k
-
-	jra	L(mmu_map_tt_done)
-
-L(do_map):
-	mmu_map_eq	ARG2,ARG3,ARG4
-
-L(mmu_map_tt_done):
-
-func_return	mmu_map_tt
-
-/*
- *	mmu_map
- *
- *	This routine will map a range of memory using a pointer
- *	table and allocating the pages on the fly from the kernel.
- *	The pointer table does not have to be already linked into
- *	the root table, this routine will do that if necessary.
- *
- *	NOTE
- *	This routine will assert failure and use the serial_putc
- *	routines in the case of a run-time error.  For example,
- *	if the address is already mapped.
- *
- *	NOTE-2
- *	This routine will use early terminating descriptors
- *	where possible for the 68020+68851 and 68030 type
- *	processors.
- */
-func_start	mmu_map,%d0-%d4/%a0-%a4
-
-	dputs	"\nmmu_map:"
-	dputn	ARG1
-	dputn	ARG2
-	dputn	ARG3
-	dputn	ARG4
-	dputc	'\n'
-
-	/* Get logical address and round it down to 256KB
-	 */
-	movel	ARG1,%d0
-	andl	#-(PAGESIZE*PAGE_TABLE_SIZE),%d0
-	movel	%d0,%a3
-
-	/* Get the end address
-	 */
-	movel	ARG1,%a4
-	addl	ARG3,%a4
-	subql	#1,%a4
-
-	/* Get physical address and round it down to 256KB
-	 */
-	movel	ARG2,%d0
-	andl	#-(PAGESIZE*PAGE_TABLE_SIZE),%d0
-	movel	%d0,%a2
-
-	/* Add page attributes to the physical address
-	 */
-	movel	ARG4,%d0
-	orw	#_PAGE_PRESENT+_PAGE_ACCESSED+_PAGE_DIRTY,%d0
-	addw	%d0,%a2
-
-	dputn	%a2
-	dputn	%a3
-	dputn	%a4
-
-	is_not_040_or_060(L(mmu_map_030))
-
-	addw	#_PAGE_GLOBAL040,%a2
-/*
- *	MMU 040 & 060 Support
- *
- *	The MMU usage for the 040 and 060 is different enough from
- *	the 030 and 68851 that there is separate code.  This comment
- *	block describes the data structures and algorithms built by
- *	this code.
- *
- *	The 040 does not support early terminating descriptors, as
- *	the 030 does.  Therefore, a third level of table is needed
- *	for the 040, and that would be the page table.  In Linux,
- *	page tables are allocated directly from the memory above the
- *	kernel.
- *
- */
-
-L(mmu_map_040):
-	/* Calculate the offset into the root table
-	 */
-	movel	%a3,%d0
-	moveq	#ROOT_INDEX_SHIFT,%d1
-	lsrl	%d1,%d0
-	mmu_get_root_table_entry	%d0
-
-	/* Calculate the offset into the pointer table
-	 */
-	movel	%a3,%d0
-	moveq	#PTR_INDEX_SHIFT,%d1
-	lsrl	%d1,%d0
-	andl	#PTR_TABLE_SIZE-1,%d0
-	mmu_get_ptr_table_entry		%a0,%d0
-
-	/* Calculate the offset into the page table
-	 */
-	movel	%a3,%d0
-	moveq	#PAGE_INDEX_SHIFT,%d1
-	lsrl	%d1,%d0
-	andl	#PAGE_TABLE_SIZE-1,%d0
-	mmu_get_page_table_entry	%a0,%d0
-
-	/* The page table entry must not no be busy
-	 */
-	tstl	%a0@
-	jne	L(mmu_map_error)
-
-	/* Do the mapping and advance the pointers
-	 */
-	movel	%a2,%a0@
-2:
-	addw	#PAGESIZE,%a2
-	addw	#PAGESIZE,%a3
-
-	/* Ready with mapping?
-	 */
-	lea	%a3@(-1),%a0
-	cmpl	%a0,%a4
-	jhi	L(mmu_map_040)
-	jra	L(mmu_map_done)
-
-L(mmu_map_030):
-	/* Calculate the offset into the root table
-	 */
-	movel	%a3,%d0
-	moveq	#ROOT_INDEX_SHIFT,%d1
-	lsrl	%d1,%d0
-	mmu_get_root_table_entry	%d0
-
-	/* Check if logical address 32MB aligned,
-	 * so we can try to map it once
-	 */
-	movel	%a3,%d0
-	andl	#(PTR_TABLE_SIZE*PAGE_TABLE_SIZE*PAGESIZE-1)&(-ROOT_TABLE_SIZE),%d0
-	jne	1f
-
-	/* Is there enough to map for 32MB at once
-	 */
-	lea	%a3@(PTR_TABLE_SIZE*PAGE_TABLE_SIZE*PAGESIZE-1),%a1
-	cmpl	%a1,%a4
-	jcs	1f
-
-	addql	#1,%a1
-
-	/* The root table entry must not no be busy
-	 */
-	tstl	%a0@
-	jne	L(mmu_map_error)
-
-	/* Do the mapping and advance the pointers
-	 */
-	dputs	"early term1"
-	dputn	%a2
-	dputn	%a3
-	dputn	%a1
-	dputc	'\n'
-	movel	%a2,%a0@
-
-	movel	%a1,%a3
-	lea	%a2@(PTR_TABLE_SIZE*PAGE_TABLE_SIZE*PAGESIZE),%a2
-	jra	L(mmu_mapnext_030)
-1:
-	/* Calculate the offset into the pointer table
-	 */
-	movel	%a3,%d0
-	moveq	#PTR_INDEX_SHIFT,%d1
-	lsrl	%d1,%d0
-	andl	#PTR_TABLE_SIZE-1,%d0
-	mmu_get_ptr_table_entry		%a0,%d0
-
-	/* The pointer table entry must not no be busy
-	 */
-	tstl	%a0@
-	jne	L(mmu_map_error)
-
-	/* Do the mapping and advance the pointers
-	 */
-	dputs	"early term2"
-	dputn	%a2
-	dputn	%a3
-	dputc	'\n'
-	movel	%a2,%a0@
-
-	addl	#PAGE_TABLE_SIZE*PAGESIZE,%a2
-	addl	#PAGE_TABLE_SIZE*PAGESIZE,%a3
-
-L(mmu_mapnext_030):
-	/* Ready with mapping?
-	 */
-	lea	%a3@(-1),%a0
-	cmpl	%a0,%a4
-	jhi	L(mmu_map_030)
-	jra	L(mmu_map_done)
-
-L(mmu_map_error):
-
-	dputs	"mmu_map error:"
-	dputn	%a2
-	dputn	%a3
-	dputc	'\n'
-
-L(mmu_map_done):
-
-func_return	mmu_map
-
-/*
- *	mmu_fixup
- *
- *	On the 040 class machines, all pages that are used for the
- *	mmu have to be fixed up.
- */
-
-func_start	mmu_fixup_page_mmu_cache,%d0/%a0
-
-	dputs	"mmu_fixup_page_mmu_cache"
-	dputn	ARG1
-
-	/* Calculate the offset into the root table
-	 */
-	movel	ARG1,%d0
-	moveq	#ROOT_INDEX_SHIFT,%d1
-	lsrl	%d1,%d0
-	mmu_get_root_table_entry	%d0
-
-	/* Calculate the offset into the pointer table
-	 */
-	movel	ARG1,%d0
-	moveq	#PTR_INDEX_SHIFT,%d1
-	lsrl	%d1,%d0
-	andl	#PTR_TABLE_SIZE-1,%d0
-	mmu_get_ptr_table_entry		%a0,%d0
-
-	/* Calculate the offset into the page table
-	 */
-	movel	ARG1,%d0
-	moveq	#PAGE_INDEX_SHIFT,%d1
-	lsrl	%d1,%d0
-	andl	#PAGE_TABLE_SIZE-1,%d0
-	mmu_get_page_table_entry	%a0,%d0
-
-	movel	%a0@,%d0
-	andil	#_CACHEMASK040,%d0
-	orl	%pc@(SYMBOL_NAME(m68k_pgtable_cachemode)),%d0
-	movel	%d0,%a0@
-
-	dputc	'\n'
-
-func_return	mmu_fixup_page_mmu_cache
-
-/*
- *	mmu_temp_map
- *
- *	create a temporary mapping to enable the mmu,
- *	this we don't need any transparation translation tricks.
- */
-
-func_start	mmu_temp_map,%d0/%d1/%a0/%a1
-
-	dputs	"mmu_temp_map"
-	dputn	ARG1
-	dputn	ARG2
-	dputc	'\n'
-
-	lea	%pc@(L(temp_mmap_mem)),%a1
-
-	/* Calculate the offset in the root table
-	 */
-	movel	ARG2,%d0
-	moveq	#ROOT_INDEX_SHIFT,%d1
-	lsrl	%d1,%d0
-	mmu_get_root_table_entry	%d0
-
-	/* Check if the table is temporary allocated, so we have to reuse it
-	 */
-	movel	%a0@,%d0
-	cmpl	%pc@(L(memory_start)),%d0
-	jcc	1f
-
-	/* Temporary allocate a ptr table and insert it into the root table
-	 */
-	movel	%a1@,%d0
-	addl	#PTR_TABLE_SIZE*4,%a1@
-	orw	#_PAGE_TABLE+_PAGE_ACCESSED,%d0
-	movel	%d0,%a0@
-	dputs	" (new)"
-1:
-	dputn	%d0
-	/* Mask the root table entry for the ptr table
-	 */
-	andw	#-ROOT_TABLE_SIZE,%d0
-	movel	%d0,%a0
-
-	/* Calculate the offset into the pointer table
-	 */
-	movel	ARG2,%d0
-	moveq	#PTR_INDEX_SHIFT,%d1
-	lsrl	%d1,%d0
-	andl	#PTR_TABLE_SIZE-1,%d0
-	lea	%a0@(%d0*4),%a0
-	dputn	%a0
-
-	/* Check if a temporary page table is already allocated
-	 */
-	movel	%a0@,%d0
-	jne	1f
-
-	/* Temporary allocate a page table and insert it into the ptr table
-	 */
-	movel	%a1@,%d0
-	addl	#PTR_TABLE_SIZE*4,%a1@
-	orw	#_PAGE_TABLE+_PAGE_ACCESSED,%d0
-	movel	%d0,%a0@
-	dputs	" (new)"
-1:
-	dputn	%d0
-	/* Mask the ptr table entry for the page table
-	 */
-	andw	#-PTR_TABLE_SIZE,%d0
-	movel	%d0,%a0
-
-	/* Calculate the offset into the page table
-	 */
-	movel	ARG2,%d0
-	moveq	#PAGE_INDEX_SHIFT,%d1
-	lsrl	%d1,%d0
-	andl	#PAGE_TABLE_SIZE-1,%d0
-	lea	%a0@(%d0*4),%a0
-	dputn	%a0
-
-	/* Insert the address into the page table
-	 */
-	movel	ARG1,%d0
-	andw	#-PAGESIZE,%d0
-	orw	#_PAGE_PRESENT+_PAGE_ACCESSED+_PAGE_DIRTY,%d0
-	movel	%d0,%a0@
-	dputn	%d0
-
-	dputc	'\n'
-
-func_return	mmu_temp_map
-
-func_start	mmu_engage,%d0-%d2/%a0-%a3
-
-	moveq	#ROOT_TABLE_SIZE-1,%d0
-	/* Temporarily use a different root table.  */
-	lea	%pc@(L(kernel_pgdir_ptr)),%a0
-	movel	%a0@,%a2
-	movel	%pc@(L(memory_start)),%a1
-	movel	%a1,%a0@
-	movel	%a2,%a0
-1:
-	movel	%a0@+,%a1@+
-	dbra	%d0,1b
-
-	lea	%pc@(L(temp_mmap_mem)),%a0
-	movel	%a1,%a0@
-
-	movew	#PAGESIZE-1,%d0
-1:
-	clrl	%a1@+
-	dbra	%d0,1b
-
-	lea	%pc@(1b),%a0
-	movel	#1b,%a1
-	/* Skip temp mappings if phys == virt */
-	cmpl	%a0,%a1
-	jeq	1f
-
-	mmu_temp_map	%a0,%a0
-	mmu_temp_map	%a0,%a1
-
-	addw	#PAGESIZE,%a0
-	addw	#PAGESIZE,%a1
-	mmu_temp_map	%a0,%a0
-	mmu_temp_map	%a0,%a1
-1:
-	movel	%pc@(L(memory_start)),%a3
-	movel	%pc@(L(phys_kernel_start)),%d2
-
-	is_not_040_or_060(L(mmu_engage_030))
-
-L(mmu_engage_040):
-	.chip	68040
-	nop
-	cinva	%bc
-	nop
-	pflusha
-	nop
-	movec	%a3,%srp
-	movel	#TC_ENABLE+TC_PAGE4K,%d0
-	movec	%d0,%tc		/* enable the MMU */
-	jmp	1f:l
-1:	nop
-	movec	%a2,%srp
-	nop
-	cinva	%bc
-	nop
-	pflusha
-	.chip	68k
-	jra	L(mmu_engage_cleanup)
-
-L(mmu_engage_030_temp):
-	.space	12
-L(mmu_engage_030):
-	.chip	68030
-	lea	%pc@(L(mmu_engage_030_temp)),%a0
-	movel	#0x80000002,%a0@
-	movel	%a3,%a0@(4)
-	movel	#0x0808,%d0
-	movec	%d0,%cacr
-	pmove	%a0@,%srp
-	pflusha
-	/*
-	 * enable,super root enable,4096 byte pages,7 bit root index,
-	 * 7 bit pointer index, 6 bit page table index.
-	 */
-	movel	#0x82c07760,%a0@(8)
-	pmove	%a0@(8),%tc	/* enable the MMU */
-	jmp	1f:l
-1:	movel	%a2,%a0@(4)
-	movel	#0x0808,%d0
-	movec	%d0,%cacr
-	pmove	%a0@,%srp
-	pflusha
-	.chip	68k
-
-L(mmu_engage_cleanup):
-	subl	%d2,%a2
-	movel	%a2,L(kernel_pgdir_ptr)
-	subl	%d2,%fp
-	subl	%d2,%sp
-	subl	%d2,ARG0
-	subl	%d2,L(memory_start)
-
-func_return	mmu_engage
+2:	putc('3')
+9:	putc('0')
+	putr()
+		
+	rts
+#endif /* MMU_PRINT */
 
-func_start	mmu_get_root_table_entry,%d0/%a1
+/*
+ *	mmu_clear_root_table
+ *
+ *	%a5 = pointer to the root table
+ *	
+ *	This routine will clear out the kernel root table
+ *
+ *	The root table points to 128 pointer tables.  Because the
+ *	root table describes 32 bits of logical memory (and there
+ *	are 7 bits of indexing in the root table), there are 25 bits
+ *	of logical address space described by each entry in the
+ *	root table.  2^25 is 32Meg, another way to look at that is
+ *	4Gig / 128 = 32Meg.  Any entry which does not have bit 1 set
+ *	is not a valid entry.  In that case, a reference into that
+ *	memory range will cause a memory exception (bus error).
+ *
+ */
+mmu_clear_root_table:
+	movel	%d0,%sp@-
+	
+	moveq	#ROOT_TABLE_SIZE-1,%d0
+1:	clrl	%a5@(%d0*4)
+	dbra	%d0,1b
 
-#if 0
-	dputs	"mmu_get_root_table_entry:"
-	dputn	ARG1
-	dputs	" ="
-#endif
+	movel	%sp@+,%d0
+	rts
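
The arithmetic from the comment, plus the clearing loop, in C for reference
(ROOT_TABLE_SIZE is assumed to be 128 here, as the comment says; I did not
pull it from the headers):

    /* Each root entry covers 2^32 / 128 = 2^25 bytes = 32 Meg. */
    #define MY_ROOT_TABLE_SIZE 128

    static void clear_root_table(unsigned long *root)
    {
        int i;

        for (i = 0; i < MY_ROOT_TABLE_SIZE; i++)
            root[i] = 0;        /* bit 1 clear -> invalid, access faults */
    }
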
 
-	movel	%pc@(L(kernel_pgdir_ptr)),%a0
-	tstl	%a0
-	jne	2f
+/*
+ *	mmu_clear_pointer_table
+ *
+ *	%a4 = pointer to a pointer table
+ *
+ *	This routine will clear out a pointer table.
+ *	It does NOT link the pointer table into the root table
+ *	(that linkage is done by mapping memory!)
+ */
+mmu_clear_pointer_table:	
+	movel	%d0,%sp@-
+	
+	moveq	#PTR_TABLE_SIZE-1,%d0
+1:	clrl	%a4@(%d0*4)
+	dbra	%d0,1b
 
-	dputs	"\nmmu_init:"
+	movel	%sp@+,%d0
+	rts
 
-	/* Find the start of free memory, get_bi_record does this for us,
-	 * as the bootinfo structure is located directly behind the kernel
-	 * and and we simply search for the last entry.
-	 */
-	get_bi_record	BI_LAST
-	addw	#PAGESIZE-1,%a0
-	movel	%a0,%d0
-	andw	#-PAGESIZE,%d0
+/*
+ *	mmu_clear_page_table
+ *
+ *	%a3 = pointer to a page table
+ *
+ *	This routine will clear out a page table.
+ *	It does NOT link the page table into the pointer table
+ *	(that linkage is done by mapping memory!)
+ */
+mmu_clear_page_table:	
+	movel	%d0,%sp@-
+	
+	moveq	#PAGE_TABLE_SIZE-1,%d0
+1:	clrl	%a3@(%d0*4)
+	dbra	%d0,1b
 
-	dputn	%d0
+	movel	%sp@+,%d0
+	rts
 
-	lea	%pc@(L(memory_start)),%a0
-	movel	%d0,%a0@
-	lea	%pc@(L(kernel_end)),%a0
-	movel	%d0,%a0@
+/*
+ *	mmu_map
+ *
+ *	%a6 = address of free memory above kernel (page aligned)
+ *	%a5 = pointer to the root table
+ *	%a4 = pointer to a pointer table
+ *	%a1 = physical address of mapping
+ *	%a0 = logical address to map
+ *	%d1 = memory type
+ *	%d0 = length of the mapping
+ *
+ *	This routine will map a range of memory using a pointer
+ *	table and allocating the pages on the fly from the kernel.
+ *	The pointer table does not have to be already linked into
+ *	the root table, this routine will do that if necessary.
+ *
+ *	NOTE
+ *	This routine will assert failure and use the Lserial_putc
+ *	routines in the case of a run-time error.  For example,
+ *	if the address to be mapped requires two pointer tables
+ *	this routine will fail and the boot process will terminate.
+ *	A higher level routine would have to be written to call
+ *	this routine multiple times (with different parameters)
+ *	if a single mapping might straddle multiple pointer tables.
+ *
+ *	NOTE-2
+ *	This routine will use early terminating descriptors
+ *	where possible for the 68020+68851 and 68030 type
+ *	processors.
+ */
+mmu_map:
+	moveml	%d0-%d7/%a0-%a5,%sp@-
 
-	/* we have to return the first page at _stext since the init code
-	 * in mm/init.c simply expects kernel_pg_dir there, the rest of
-	 * page is used for further ptr tables in get_ptr_table.
+	/* Calculate the offset in the root table 
 	 */
-	lea	%pc@(SYMBOL_NAME(_stext)),%a0
-	lea	%pc@(L(mmu_cached_pointer_tables)),%a1
-	movel	%a0,%a1@
-	addl	#ROOT_TABLE_SIZE*4,%a1@
+	movel	MAP_LOG,%d5
+	andil	#0xfe000000,%d5
+	roll	#7,%d5
 
-	lea	%pc@(L(mmu_num_pointer_tables)),%a1
-	addql	#1,%a1@
-
-	/* clear the page
+	/* Calculate the offset in the pointer table 
+	 */
+	movel	MAP_LOG,%d4
+	andil	#0x01fc0000,%d4
+	lsrl	#2,%d4
+	swap	%d4
+	
+	/* Calculate the offset in the page table (used on 040's + 060's)
 	 */
-	movel	%a0,%a1
-	movew	#PAGESIZE/4-1,%d0
+	movel	MAP_LOG,%d3
+	andil	#0x0003f000,%d3
+	lsll	#4,%d3
+	swap	%d3
+
+	/*
+	 *	The code that follows implements the following rules
+	 *	with respect to the pointer table for this memory mapping
+	 *	1) If the memory to be mapped lies within an already
+	 *	   mapped region, there will be a pointer table listed
+	 *	   in the root table.  This pointer table must be used.
+	 *	2) If the caller does not supply a pointer table, a table
+	 *	   will be allocated from above the kernel.
+	 *	3) Else, the caller must have passed the address to memory
+	 *	   that will be used as the pointer table for this mapping.
+	 */
+mmu_map_check_root_entry:
+	/* Is another pointer table already mapped into this root entry?
+	 */
+	movel	%a5@(%d5*4),%d2
+	jbeq	mmu_map_check_make_new_pointer_table
+
+	/* If there is an entry already, we must use it
+	 * to preserve existing MMU mapping data!
+	 */
+	andil	#_TABLE_MASK,%d2
+	movel	%d2,%a4
+	jbra	3f
+	
+mmu_map_check_make_new_pointer_table:	
 1:
-	clrl	%a1@+
-	dbra	%d0,1b
+	/* Should we get a pointer table from memory on behalf of the caller?
+	 */
+	tstl	%a4
+	jbne	2f
 
-	lea	%pc@(L(kernel_pgdir_ptr)),%a1
-	movel	%a0,%a1@
+	jbsr	mmu_get_pointer_table
 
-	dputn	%a0
-	dputc	'\n'
 2:
-	movel	ARG1,%d0
-	lea	%a0@(%d0*4),%a0
+	/* Put the pointer table into the root table
+	 */	
+	movel	%a4,%d2
+	orw	#_PAGE_TABLE+_PAGE_ACCESSED,%d2
+	movel	%d2,%a5@(%d5*4)
 
-#if 0
-	dputn	%a0
-	dputc	'\n'
-#endif
+3:
+#if defined(CONFIG_M68020) || defined(CONFIG_M68030)
+	/* Split up here, 030's have different logic than 040's
+	 */
+	
+	is_not_040_or_060(mmu_map_030)
+	
+#endif /* CONFIG_M68020 || CONFIG_M68030 */
+
+/*
+ *	MMU 040 & 060 Support
+ *
+ *	The MMU usage for the 040 and 060 is different enough from
+ *	the 030 and 68851 that there is separate code.  This comment
+ *	block describes the data structures and algorithms built by
+ *	this code.
+ *	
+ *	The 040 does not support early terminating descriptors, as
+ *	the 030 does.  Therefore, a third level of table is needed
+ *	for the 040, and that would be the page table.  In Linux,
+ *	page tables are allocated directly from the memory above the
+ *	kernel.  Register A6 points to the memory above the kernel and
+ *	it is from that pool that page tables are allocated.
+ *	
+ *	For each page table that is allocated from above the kernel,
+ *	that page table's address has to be put into the pointer table.
+ *	Then, each page table has to be fully prepared.  Page tables,
+ *	by the way, describe a full 256K of memory.  That coincides with
+ *	the fact that a single entry in the pointer table describes
+ *	256K of memory because a pointer table entry points to a
+ *	complete page table.  There are 64 entries in the page table,
+ *	and each entry in the page table points to a physical page of
+ *	memory.  Each page is 4K.
+ *	
+ *	Also, there is a label "kpt" which holds the pointer to the
+ *	page table that describes the kernel.  This is only true on
+ *	the 040 and 060 cpu's.  This algorithm, because it's general
+ *	and allows the mapping of arbitrary regions of memory, assumes
+ *	that the first memory mapping is the one which maps the kernel.
+ *	So it's that page table that gets stored at kpt.
+ *	
+ *	Also, it is an error to attempt to map two regions that
+ *	fall within the same 256K range.  For that to work, this routine
+ *	would need to be modified.
+ *	
+ *	
+ *	Last:
+ *	This body of code is present even on 030 systems as this logic
+ *	is used when a block on an 030 machine is not large enough
+ *	to use an entire early terminating page descriptor.  (This
+ *	can happen on the Macintosh when the video begins life at
+ *	physical address 0.)
+ */
+mmu_map_040:
+	/* Enhance the physical address to make a valid page descriptor
+	 */
+	movel	MAP_PHYS,%d2
+	orw	#_PAGE_PRESENT,%d2
+	orw	MAP_CACHE,%d2
+	movel	%d2,MAP_PHYS
+
+	/* Convert address range length into # of pages
+	 */
+	movel	#PAGESHIFT,%d2
+	lsrl	%d2,MAP_LENGTH
+
+mmu_040_loop:
+	/* See if there is an existing page table pointer to use
+	 */
+	movel	%a4@(%d4*4),%d2
+	andil	#_TABLE_MASK,%d2
+	movel	%d2,%a3
+	tstl	%a3
+	jbne	mmu_fill_040_pagetable
+
+	jbsr	mmu_get_page_table
+
+	/* Now, begin assigning physical pages into the page table
+	 */
+mmu_fill_040_pagetable:
+	movel	MAP_PHYS,%a3@(%d3*4)
+
+	/* Decrement page count
+	 */
+	subq	#1,MAP_LENGTH
+	jbeq	mmu_map_done
+
+	/* Increase mapping addresses
+	 */
+	addl	#PAGESIZE,MAP_PHYS
+	addl	#PAGESIZE,MAP_LOG
+
+	/* Have we exhausted this page table?
+	 */
+	addq	#1,%d3
+	cmpil	#PAGE_TABLE_SIZE,%d3
+	jbne	mmu_fill_040_pagetable
+
+	/* Have we exhausted this pointer table?
+	 */
+	clrl	%d3
+	addq	#1,%d4
+	cmpil	#PTR_TABLE_SIZE,%d4
+	jbne	mmu_040_loop
 
-func_return	mmu_get_root_table_entry
+	/* We've exhausted this pointer table... get a new one
+	 */
+	jbsr	mmu_get_pointer_table
+	clrl	%d4	/* %d4 is ptr table index */
+	
+	jbra	mmu_040_loop
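
Roughly what the 040 loop above does, written out in C.  The constants are
assumed to match the comment (64-entry page tables, 128-entry pointer tables,
4K pages); the flag handling and the table fetches are only hinted at in
comments:

    /* Sketch of the 040 mapping loop above (names and constants are mine). */
    #define MY_PAGESIZE        4096
    #define MY_PAGE_TABLE_SIZE 64    /* one page table maps 64 * 4K = 256K      */
    #define MY_PTR_TABLE_SIZE  128   /* one pointer table maps 128 * 256K = 32M */

    static void map_040(unsigned long log, unsigned long phys, unsigned long len)
    {
        unsigned long pages    = len / MY_PAGESIZE;
        unsigned long ptr_idx  = (log >> 18) & 0x7f;
        unsigned long page_idx = (log >> 12) & 0x3f;

        while (pages--) {
            /* page_table[page_idx] = phys | flags;  -- the movel above */
            phys += MY_PAGESIZE;
            log  += MY_PAGESIZE;
            if (++page_idx == MY_PAGE_TABLE_SIZE) {   /* page table exhausted */
                page_idx = 0;
                /* a fresh page table is fetched at the top of the loop */
                if (++ptr_idx == MY_PTR_TABLE_SIZE) { /* pointer table too */
                    ptr_idx = 0;
                    /* mmu_get_pointer_table supplies a new one */
                }
            }
        }
    }
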
 
+/*
+ * mmu_map_revert
+ *
+ * Control gets here if mmu_map_040_tt is called and the conditions
+ * are not correct to employ tt translations.
+ */
+mmu_map_revert:
+	puts("MMU Note- attempting to use mmu_map_tt when not appropriate")
+	putr()
+	jbra	mmu_map
 
+/*
+ * mmu_map_tt
+ * 
+ * This is a specific function which works on all 680x0 machines.
+ * On 040 & 060 it will attempt to use Transparent Translation registers (tt1).
+ * On 020 & 030 it will call the standard mmu_map which will use early
+ * terminating descriptors.
+ */
+mmu_map_tt:
+	is_not_040_or_060(mmu_map)
 
-func_start	mmu_get_ptr_table_entry,%d0/%a1
+#if defined(CONFIG_M68040) || defined(CONFIG_M68060)
+mmu_map_040_tt:
+	moveml	%d0-%d7/%a0-%a4,%sp@-
+#if defined(DEBUG)
+	/*
+	 * Test for Transparent Translation working conditions
+	 */
+	cmpl	MAP_PHYS,MAP_LOG
+	jbne	mmu_map_revert
+
+	movel	MAP_PHYS,%d2
+	andil	#0x00ffffff,%d2
+	jbne	mmu_map_revert
+
+	/* Length must be a power of two
+	 */
+	movel	#0x01000000,%d2
+3:	
+	cmpl	%d2,MAP_LENGTH
+	jbeq	5f
+	lsll	#1,%d2
+	jbne	3b		/* Will terminate when %d2 == 0 */
+	
+	jbra	mmu_map_revert
+5:	
+#endif /* DEBUG */
+
+	subil	#0x01000000,MAP_LENGTH
+	lsrl	#8,MAP_LENGTH
+	movel	MAP_PHYS,%d2
+	orl	MAP_LENGTH,%d2
+	oriw	#0xa000,%d2	/* Enable | Supervisor Only */
+	orb	MAP_CACHE,%d2
 
-#if 0
-	dputs	"mmu_get_ptr_table_entry:"
-	dputn	ARG1
-	dputn	ARG2
-	dputs	" ="
-#endif
+	.chip	68040
+	movec	%d2,%itt1
+	movec	%d2,%dtt1
+	.chip	68k
 
-	movel	ARG1,%a0
-	movel	%a0@,%d0
-	jne	2f
+	moveml	%sp@+,%d0-%d7/%a0-%a4
+	rts
+#endif /* CONFIG_M68040 || CONFIG_M68060) */
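
What mmu_map_040_tt builds by hand is the transparent-translation descriptor:
physical base in the top byte, an address mask derived from the region size,
0xa000 for enable plus supervisor-only, and the cache bits in the low byte.
My reading of it as a C helper (name is mine), valid only under the
conditions the DEBUG block checks (phys == log, 16M aligned, power-of-two
length of at least 16M):

    /* Sketch of the TT register value built above (my reading of the code). */
    static unsigned long make_tt(unsigned long phys, unsigned long len,
                                 unsigned long cache_bits)
    {
        unsigned long tt = phys;            /* base address, top 8 bits    */

        tt |= (len - 0x01000000UL) >> 8;    /* address mask -> bits 23..16 */
        tt |= 0xa000;                       /* enable | supervisor only    */
        tt |= cache_bits & 0xff;            /* cache mode, as the orb does */
        return tt;
    }

For a 16M region the mask byte comes out as 0x00, for 32M as 0x01, and so on.
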
 
-	/* Keep track of the number of pointer tables we use
+#if defined(CONFIG_M68020) || defined(CONFIG_M68030)		
+mmu_map_030:
+	/*
+	 * If not a multiple of 256K, use page descriptors.
 	 */
-	dputs	"\nmmu_get_new_ptr_table:"
-	lea	%pc@(L(mmu_num_pointer_tables)),%a0
-	movel	%a0@,%d0
-	addql	#1,%a0@
+	movel	MAP_LENGTH,%d2
+	andl	#(PAGE_TABLE_SIZE*PAGESIZE)-1,%d2
+	jbne	mmu_map_040
 
-	/* See if there is a free pointer table in our cache of pointer tables
+	/* Enhance the MMU mode to make an early terminating descriptor
 	 */
-	lea	%pc@(L(mmu_cached_pointer_tables)),%a1
-	andw	#7,%d0
-	jne	1f
+	movel	MAP_PHYS,%d2
+	orw	#_PAGE_PRESENT,%d2
+	orw	MAP_CACHE,%d2
+	movel	%d2,MAP_PHYS
 
-	/* Get a new pointer table page from above the kernel memory
-	 */
-	get_new_page
-	movel	%a0,%a1@
-1:
-	/* There is an unused pointer table in our cache... use it
+	/* Convert the byte length into # of 256K entries
 	 */
-	movel	%a1@,%d0
-	addl	#PTR_TABLE_SIZE*4,%a1@
-
-	dputn	%d0
-	dputc	'\n'
+	movel	#PTR_INDEX_SHIFT,%d2
+	lsrl	%d2,MAP_LENGTH
 
-	/* Insert the new pointer table into the root table
+	cmpil	#PTR_TABLE_SIZE,MAP_LENGTH
+	jbcc	mmu_030_root_loop
+		
+	/* Since %a5 and %a4 point to a valid root table and
+	 * a valid pointer table, all we need to do is map.
 	 */
-	movel	ARG1,%a0
-	orw	#_PAGE_TABLE+_PAGE_ACCESSED,%d0
-	movel	%d0,%a0@
-2:
-	/* Extract the pointer table entry
+mmu_030_ptr_loop:
+	/* Map logical to physical
 	 */
-	andw	#-PTR_TABLE_SIZE,%d0
-	movel	%d0,%a0
-	movel	ARG2,%d0
-	lea	%a0@(%d0*4),%a0
+	movel	MAP_PHYS,%a4@(%d4*4)
 
-#if 0
-	dputn	%a0
-	dputc	'\n'
-#endif
+	/* Decrement number of 256K chunks to map
+	 */
+	subq	#1,MAP_LENGTH
+	jbeq	mmu_map_done
 
-func_return	mmu_get_ptr_table_entry
+	/* Increment mapping addresses
+	 */
+	addl	#PAGE_TABLE_SIZE*PAGESIZE,MAP_LOG
+	addl	#PAGE_TABLE_SIZE*PAGESIZE,MAP_PHYS
 
+	/* Increment pointer table offset
+	 */
+	addq	#1,%d4
+	cmpl	#PTR_TABLE_SIZE,%d4
+	jbne	mmu_030_ptr_loop
+	
+	/* We've exhausted this pointer table... get a new one
+	 */
+	jbsr	mmu_get_pointer_table
+	clrl	%d4	/* ptr table index */
+	
+	jbra	mmu_030_ptr_loop
+
+mmu_030_root_loop:
+	/* Early terminating descriptor 32M entry
+	 */
+	movel	MAP_PHYS,%a5@(%d5*4)
+
+	/* Decrement number of 32M chunks to map
+	 */
+	subl	#PTR_TABLE_SIZE,MAP_LENGTH
+	jbeq	mmu_map_done
+	cmpl	#PTR_TABLE_SIZE,MAP_LENGTH
+	jbcs	mmu_030_ptr_loop
+
+	/* Increment mapping addresses
+	 */
+	addl	#PTR_TABLE_SIZE*PAGE_TABLE_SIZE*PAGESIZE,MAP_LOG
+	addl	#PTR_TABLE_SIZE*PAGE_TABLE_SIZE*PAGESIZE,MAP_PHYS
+	
+	/* Increment root table offset
+	 */
+	addq	#1,%d5
+	cmpl	#ROOT_TABLE_SIZE,%d5
+	jbne	mmu_030_root_loop
+#if defined(DEBUG)
+	/* We're trying to map past 0xFFFF.FFFF
+	 */
+	moveq	#7,%d7
+	jbra	mmu_err
+#endif /* DEBUG */
+#endif /* CONFIG_M68020 || CONFIG_M68030 */
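
The 030 path above is really a granularity decision: lengths that are not a
multiple of 256K fall through to the 040-style page tables, otherwise the
mapping starts with 32M early-terminating root entries when there are at
least 128 x 256K chunks, and drops down to 256K pointer-table entries for
the remainder.  In C, roughly (the strings just name the paths above):

    /* Which descriptor size the 030 mapping starts with (sketch only). */
    static const char *pick_030_granularity(unsigned long len)
    {
        if (len & (256 * 1024 - 1))     /* not a multiple of 256K       */
            return "page descriptors (mmu_map_040 path)";

        if ((len >> 18) < 128)          /* fewer than 128 x 256K chunks */
            return "256K early-terminating pointer-table entries";

        return "32M early-terminating root-table entries";
    }
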
+	
+mmu_map_done:
+	moveml	%sp@+,%d0-%d7/%a0-%a5
+	rts
 
-func_start	mmu_get_page_table_entry,%d0/%a1
+#if defined(DEBUG)
+mmu_err:
+	movel	%d7,%d3
+	putr()
+	puts("Error: ")
+
+	movel	%d3,%d7
+	jbsr	Lserial_putnum
+
+	puts("Logical Address #0x")
+	putn(MAP_LOG)
+	putr()
+
+	puts("Physical Address #0x")
+	putn(MAP_PHYS)
+	putr()
+
+	puts("Length #0x")
+	putn(MAP_LENGTH)
+	putr()
+
+
+	puts("Cache bits #0x")
+	putn(MAP_CACHE)
+	putr()
 
-#if 0
-	dputs	"mmu_get_page_table_entry:"
-	dputn	ARG1
-	dputn	ARG2
-	dputs	" ="
+#if defined(MMU_PRINT)
+	jbsr	mmu_print
 #endif
+	
+1:
+	jbra	1b
+#endif	/* DEBUG */
 
-	movel	ARG1,%a0
-	movel	%a0@,%d0
-	jne	2f
+/*
+ *	mmu_get_page_table
+ *	
+ *	This routine will get a page table from above the kernel.
+ *	What's most interesting about this routine is that it can
+ *	carve up to 16 page tables out of a single page of memory.
+ *	This is the same allocation scheme the kernel itself uses
+ *	later on; it is simply used down here as well.
+ */
+mmu_get_page_table:
+	moveml	%a0-%a2/%d0-%d2,%sp@-
 
-	/* If the page table entry doesn't exist, we allocate a complete new
-	 * page and use it as one continues big page table which can cover
-	 * 4MB of memory, nearly almost all mappings have that alignment.
+	/* Keep track of the number of page tables we use
 	 */
-	get_new_page
-	addw	#_PAGE_TABLE+_PAGE_ACCESSED,%a0
-
-	/* align pointer table entry for a page of page tables
+	lea	%pc@(Lmmu_num_page_tables),%a2
+	addql	#1,%a2@
+	
+	/* See if there is a page table in our cache of page tables
 	 */
-	movel	ARG1,%d0
-	andw	#-(PAGESIZE/PAGE_TABLE_SIZE),%d0
-	movel	%d0,%a1
+	lea	%pc@(SYMBOL_NAME(Lmmu_cached_page_tables)),%a2
+	movel	%a2@,%d2
+	jbne	1f
 
-	/* Insert the page tables into the pointer entries
+	/* The first time through this algorithm, we've got to get a page
 	 */
-	moveq	#PAGESIZE/PAGE_TABLE_SIZE/4-1,%d0
-1:
-	movel	%a0,%a1@+
-	lea	%a0@(PAGE_TABLE_SIZE*4),%a0
-	dbra	%d0,1b
+	movel	%a6,%d2
+	addw	#PAGESIZE,%a6	/* allocate page for 16 page tables */
 
-	/* Now we can get the initialized pointer table entry
+1:	/* There is an unused page table in our cache... use it
 	 */
-	movel	ARG1,%a0
-	movel	%a0@,%d0
+	movel	%d2,%a3
+	addil	#PAGE_TABLE_SIZE*4,%d2
+	movel	%d2,%a2@
+
+	/* Basically this is (PAGESIZE-1)-(PAGE_TABLE_SIZE*4-1), but
+	 * the two -1 can be eliminated.
+	 * The condition is true if the current page table is at the
+	 * start of the next page. */
+	andil	#PAGESIZE-PAGE_TABLE_SIZE*4,%d2
+	jbne	2f
+
+	/* Get a new cache-of-page-tables page from above the kernel memory
+	 */	
+	movel	%a6,%a2@
+	addw	#PAGESIZE,%a6	/* allocate page for 16 page tables */
+	
 2:
-	/* Extract the page table entry
-	 */
-	andw	#-PAGE_TABLE_SIZE,%d0
-	movel	%d0,%a0
-	movel	ARG2,%d0
-	lea	%a0@(%d0*4),%a0
-
-#if 0
-	dputn	%a0
-	dputc	'\n'
-#endif
-
-func_return	mmu_get_page_table_entry
-
+	lea	%pc@(SYMBOL_NAME(kpt)),%a2
+	tstl	%a2@
+	jbne	3f
+	movel	%a3,%a2@
+	
+3:	jbsr	mmu_clear_page_table
+
+	/* Log this page table (%a3) in the pointer table (%a4)
+	 */
+	movel	%a3,%d2
+	orw	#_PAGE_TABLE+_PAGE_ACCESSED,%d2
+	movel	%d2,%a4@(%d4*4)
+	
+	moveml	%sp@+,%a0-%a2/%d0-%d2
+	rts
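
mmu_get_page_table above (and mmu_get_pointer_table below) is a little bump
allocator that carves fixed-size tables out of a cache page: 64 * 4 = 256
bytes per page table, so 16 per 4K page, and 128 * 4 = 512 bytes per pointer
table, so 8 per page.  A C sketch of that scheme; the names and the
*free_mem convention (standing in for %a6) are mine:

    /* Sub-page table allocator, as used above/below (names are mine). */
    #define MY_PAGESIZE 4096

    static unsigned long table_cache;   /* next free table in the cache page */

    static unsigned long get_table(unsigned long *free_mem,
                                   unsigned long table_bytes)
    {
        unsigned long table;

        if (!table_cache) {             /* first call: grab a whole page */
            table_cache = *free_mem;
            *free_mem += MY_PAGESIZE;
        }

        table = table_cache;
        table_cache += table_bytes;

        /* handed out the last table in this page? refill the cache */
        if ((table_cache & (MY_PAGESIZE - 1)) == 0) {
            table_cache = *free_mem;
            *free_mem += MY_PAGESIZE;
        }
        return table;
    }
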
+	
 /*
- *	get_new_page
- *
- *	Return a new page from the memory start and clear it.
+ *	mmu_get_pointer_table
+ *	
+ *	This routine will get a pointer table from above the kernel.
+ *	What's most interesting about this routine is that it can
+ *	carve up to 8 pointer tables out of a single page of memory.
+ *	This is the same allocation scheme the kernel itself uses
+ *	later on; it is simply used down here as well.
  */
-func_start	get_new_page,%d0/%a1
+mmu_get_pointer_table:
+	moveml	%a0-%a2/%d0-%d2,%sp@-
 
-	dputs	"\nget_new_page:"
-
-	/* allocate the page and adjust memory_start
+	/* Keep track of the number of pointer tables we use
 	 */
-	lea	%pc@(L(memory_start)),%a0
-	movel	%a0@,%a1
-	addl	#PAGESIZE,%a0@
+	lea	%pc@(Lmmu_num_pointer_tables),%a2
+	addql	#1,%a2@
+	
+	/* See if there is a pointer table in our cache of pointer tables
+	 */
+	lea	%pc@(Lmmu_cached_pointer_tables),%a2
+	movel	%a2@,%d2
+	jbne	1f
 
-	/* clear the new page
+	/* The first time through this algorithm, we've got to get a page
 	 */
-	movel	%a1,%a0
-	movew	#PAGESIZE/4-1,%d0
-1:
-	clrl	%a1@+
-	dbra	%d0,1b
+	movel	%a6,%d2
+	addw	#PAGESIZE,%a6	/* allocate page for 8 ptr tables */
 
-	dputn	%a0
-	dputc	'\n'
+1:	/* There is an unused pointer table in our cache... use it
+	 */
+	movel	%d2,%a4
+	addil	#PTR_TABLE_SIZE*4,%d2
+	movel	%d2,%a2@
 
-func_return	get_new_page
+	/* Did we just hand out the last ptr table in the cache page?
+	 */
+	andil	#PAGESIZE-PTR_TABLE_SIZE*4,%d2
+	jbne	2f
 
+	/* Get a new cache-of-ptr-tables page from above the kernel memory
+	 */	
+	movel	%a6,%a2@
+	addw	#PAGESIZE,%a6	/* allocate page for 8 ptr tables */
+2:
+	jbsr	mmu_clear_pointer_table
 
+	/* Log this pointer table (%a4) in the root table (%a5)
+	 */
+	movel	%a4,%d2
+	orw	#_PAGE_TABLE+_PAGE_ACCESSED,%d2
+	movel	%d2,%a5@(%d5*4)
+	
+	moveml	%sp@+,%a0-%a2/%d0-%d2
+	rts
+	
 
 /*
  * Debug output support
@@ -2500,9 +2748,9 @@
  * from the MFP or a serial port of the SCC
  */
 
-#ifdef CONFIG_MAC
+#if defined(CONFIG_MAC)
 
-L(scc_initable_mac):
+scc_initable_mac:
 	.byte	9,12		/* Reset */
 	.byte	4,0x44		/* x16, 1 stopbit, no parity */
 	.byte	3,0xc0		/* receiver: 8 bpc */
@@ -2518,7 +2766,7 @@
 	.even
 #endif
 
-#ifdef CONFIG_ATARI
+#if defined(CONFIG_ATARI)
 /* #define USE_PRINTER */
 /* #define USE_SCC_B */
 /* #define USE_SCC_A */
@@ -2527,7 +2775,7 @@
 #if defined(USE_SCC_A) || defined(USE_SCC_B)
 #define USE_SCC
 /* Initialisation table for SCC */
-L(scc_initable):
+scc_initable:
 	.byte	9,12		/* Reset */
 	.byte	4,0x44		/* x16, 1 stopbit, no parity */
 	.byte	3,0xc0		/* receiver: 8 bpc */
@@ -2543,7 +2791,7 @@
 	.even
 #endif
 
-#ifdef USE_PRINTER
+#if defined(USE_PRINTER)
 
 LPSG_SELECT	= 0xff8800
 LPSG_READ	= 0xff8800
@@ -2556,7 +2804,7 @@
 LSTMFP_IERB	= 0xfffa09
 
 #elif defined(USE_SCC_B)
-
+ 
 LSCC_CTRL	= 0xff8c85
 LSCC_DATA	= 0xff8c87
 
@@ -2566,7 +2814,7 @@
 LSCC_DATA	= 0xff8c83
 
 /* Initialisation table for SCC */
-L(scc_initable):
+scc_initable:
 	.byte	9,12		/* Reset */
 	.byte	4,0x44		/* x16, 1 stopbit, no parity */
 	.byte	3,0xc0		/* receiver: 8 bpc */
@@ -2595,13 +2843,22 @@
 /*
  * Serial port output support.
  */
+LSERPER      = 0xdff032
+LSERDAT      = 0xdff030
+LSERDATR     = 0xdff018
+LNTSC_PERIOD = 371
+LPAL_PERIOD  = 368
+LNTSC_ECLOCK = 7159090
+LSERIAL_CNTRL = 0xbfd000
+LSERIAL_DTR   = 7
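
The two period constants above are consistent with the usual Amiga recipe
SERPER = (E-clock / 2) / baud - 1 at 9600 baud; that derivation is mine, the
patch itself only defines the constants (and the PAL E-clock value below is
assumed, it is not in the file):

    /* Quick check of the 9600 baud period constants above (my derivation). */
    #include <stdio.h>

    int main(void)
    {
        unsigned long ntsc_eclock = 7159090;    /* LNTSC_ECLOCK above        */
        unsigned long pal_eclock  = 7093790;    /* assumed PAL E-clock value */
        unsigned long baud = 9600;

        printf("NTSC period: %lu\n", ntsc_eclock / 2 / baud - 1);   /* 371 */
        printf("PAL  period: %lu\n", pal_eclock  / 2 / baud - 1);   /* 368 */
        return 0;
    }
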
 
 /*
  * Initialize serial port hardware for 9600/8/1
  */
-func_start	serial_init,%d0/%d1/%a0/%a1
+	.even
+Lserial_init:
 	/*
-	 *	Some of the register usage that follows
+ 	 *	Some of the register usage that follows
 	 *	CONFIG_AMIGA
 	 *		a0 = pointer to boot info record
 	 *		d0 = boot info offset
@@ -2614,23 +2871,20 @@
 	 *		a1 = address of scc_initable_mac
 	 *		d0 = init data for serial port
 	 */
+	moveml	%a0-%a1/%d0,%sp@-
 
 #ifdef CONFIG_AMIGA
-#define SERIAL_DTR	7
-#define SERIAL_CNTRL	CIABBASE+C_PRA
-
 	is_not_amiga(1f)
-	lea	%pc@(L(custom)),%a0
-	movel	#-ZTWOBASE,%a0@
-	bclr	#SERIAL_DTR,SERIAL_CNTRL-ZTWOBASE
-	get_bi_record	BI_AMIGA_SERPER
-	movew	%a0@,CUSTOMBASE+C_SERPER-ZTWOBASE
-|	movew	#61,CUSTOMBASE+C_SERPER-ZTWOBASE
+        bclr    #LSERIAL_DTR,LSERIAL_CNTRL
+        movew   #BI_AMIGA_SERPER,%d0
+        jbsr    Lget_bi_record
+        movew   %a0@,LSERPER
+        jra     9f
 1:
 #endif
-#ifdef CONFIG_ATARI
+#if defined(CONFIG_ATARI)
 	is_not_atari(4f)
-	movel	%pc@(L(iobase)),%a1
+	movel	%pc@(Liobase),%a1
 #if defined(USE_PRINTER)
 	bclr	#0,%a1@(LSTMFP_IERB)
 	bclr	#0,%a1@(LSTMFP_DDR)
@@ -2644,7 +2898,7 @@
 	moveb	%d0,%a1@(LPSG_WRITE)
 #elif defined(USE_SCC)
 	lea	%a1@(LSCC_CTRL),%a0
-	lea	%pc@(L(scc_initable)),%a1
+	lea	%pc@(scc_initable:w),%a1
 2:	moveb	%a1@+,%d0
 	jmi	3f
 	moveb	%d0,%a0@
@@ -2659,11 +2913,11 @@
 	orb	#1,%a1@(LMFP_TDCDR)
 	bset	#1,%a1@(LMFP_TSR)
 #endif
-	jra	L(serial_init_done)
-4:
+	jra	9f
+4:	
 #endif
-#ifdef CONFIG_MAC
-	is_not_mac(L(serial_init_not_mac))
+#if defined(CONFIG_MAC)
+	is_not_mac(Lserial_init_not_mac)
 #ifdef MAC_SERIAL_DEBUG
 #if !defined(MAC_USE_SCC_A) && !defined(MAC_USE_SCC_B)
 #define MAC_USE_SCC_B
@@ -2675,8 +2929,8 @@
 
 #ifdef MAC_USE_SCC_A
 	/* Initialize channel A */
-	movel	%pc@(L(mac_sccbase)),%a0
-	lea	%pc@(L(scc_initable_mac)),%a1
+	movel	%pc@(SYMBOL_NAME(mac_sccbase)),%a0
+	lea	%pc@(scc_initable_mac:w),%a1
 5:	moveb	%a1@+,%d0
 	jmi	6f
 	moveb	%d0,%a0@(mac_scc_cha_a_ctrl_offset)
@@ -2688,9 +2942,9 @@
 #ifdef MAC_USE_SCC_B
 	/* Initialize channel B */
 #ifndef MAC_USE_SCC_A	/* Load mac_sccbase only if needed */
-	movel	%pc@(L(mac_sccbase)),%a0
+	movel	%pc@(SYMBOL_NAME(mac_sccbase)),%a0
 #endif	/* MAC_USE_SCC_A */
-	lea	%pc@(L(scc_initable_mac)),%a1
+	lea	%pc@(scc_initable_mac:w),%a1
 7:	moveb	%a1@+,%d0
 	jmi	8f
 	moveb	%d0,%a0@(mac_scc_cha_b_ctrl_offset)
@@ -2700,177 +2954,168 @@
 #endif	/* MAC_USE_SCC_B */
 #endif	/* MAC_SERIAL_DEBUG */
 
-	jra	L(serial_init_done)
-L(serial_init_not_mac):
+	jra	9f
+Lserial_init_not_mac:
 #endif	/* CONFIG_MAC */
 
-L(serial_init_done):
-func_return	serial_init
+9:
+ 	moveml	%sp@+,%a0-%a1/%d0
+	rts
 
 /*
- * Output character on serial port.
+ * Output character in d7 on serial port.
+ * d7 thrashed.
  */
-func_start	serial_putc,%d0/%d1/%a0/%a1
-
-	movel	ARG1,%d0
-	cmpib	#'\n',%d0
+Lserial_putc:
+	cmpib	#'\n',%d7
 	jbne	1f
+	
+	putc(13)	/* A little safe recursion is good for the soul */
+	moveb	#'\n',%d7
+1:	
+	moveml	%a0/%a1,%sp@-
 
-	/* A little safe recursion is good for the soul */
-	serial_putc	#'\r'
-1:
-
-#ifdef CONFIG_AMIGA
+#if defined(CONFIG_AMIGA)
 	is_not_amiga(2f)
-	andw	#0x00ff,%d0
-	oriw	#0x0100,%d0
-	movel	%pc@(L(custom)),%a0
-	movew	%d0,%a0@(CUSTOMBASE+C_SERDAT)
-1:	movew	%a0@(CUSTOMBASE+C_SERDATR),%d0
-	andw	#0x2000,%d0
+	andw	#0x00ff,%d7
+	oriw	#0x0100,%d7
+	movel	%pc@(Lcustom),%a1
+	movew	%d7,%a1@(LSERDAT)
+1:	movew	%a1@(LSERDATR),%d7
+	andw	#0x2000,%d7
 	jeq	1b
-	jra	L(serial_putc_done)
+	jra	Lserial_putc_done
 2:
 #endif
 
-#ifdef CONFIG_MAC
+#if defined(CONFIG_MAC)
 	is_not_mac(5f)
 
-#ifdef CONSOLE
-	console_putc	%d0
+#if defined(CONSOLE)
+	jbsr	Lconsole_putc
 #endif /* CONSOLE */
 
-#ifdef MAC_SERIAL_DEBUG
+#if defined(MAC_SERIAL_DEBUG)
 
 #ifdef MAC_USE_SCC_A
-	movel	%pc@(L(mac_sccbase)),%a1
+	movel	%pc@(SYMBOL_NAME(mac_sccbase)),%a1
 3:	btst	#2,%a1@(mac_scc_cha_a_ctrl_offset)
 	jeq	3b
-	moveb	%d0,%a1@(mac_scc_cha_a_data_offset)
+	moveb	%d7,%a1@(mac_scc_cha_a_data_offset)
 #endif	/* MAC_USE_SCC_A */
 
 #ifdef MAC_USE_SCC_B
 #ifndef MAC_USE_SCC_A	/* Load mac_sccbase only if needed */
-	movel	%pc@(L(mac_sccbase)),%a1
+	movel	%pc@(SYMBOL_NAME(mac_sccbase)),%a1
 #endif	/* MAC_USE_SCC_A */
 4:	btst	#2,%a1@(mac_scc_cha_b_ctrl_offset)
 	jeq	4b
-	moveb	%d0,%a1@(mac_scc_cha_b_data_offset)
+	moveb	%d7,%a1@(mac_scc_cha_b_data_offset)
 #endif	/* MAC_USE_SCC_B */
 
 #endif	/* MAC_SERIAL_DEBUG */
 
-	jra	L(serial_putc_done)
-5:
+	jra	Lserial_putc_done
+5:	
 #endif	/* CONFIG_MAC */
 
-#ifdef CONFIG_ATARI
+#if defined(CONFIG_ATARI)
 	is_not_atari(4f)
-	movel	%pc@(L(iobase)),%a1
+	movel	%pc@(Liobase),%a1
 #if defined(USE_PRINTER)
 3:	btst	#0,%a1@(LSTMFP_GPIP)
 	jne	3b
 	moveb	#LPSG_IO_B,%a1@(LPSG_SELECT)
-	moveb	%d0,%a1@(LPSG_WRITE)
+	moveb	%d7,%a1@(LPSG_WRITE)
 	moveb	#LPSG_IO_A,%a1@(LPSG_SELECT)
-	moveb	%a1@(LPSG_READ),%d0
-	bclr	#5,%d0
-	moveb	%d0,%a1@(LPSG_WRITE)
+	moveb	%a1@(LPSG_READ),%d7
+	bclr	#5,%d7
+	moveb	%d7,%a1@(LPSG_WRITE)
 	nop
 	nop
-	bset	#5,%d0
-	moveb	%d0,%a1@(LPSG_WRITE)
+	bset	#5,%d7
+	moveb	%d7,%a1@(LPSG_WRITE)
 #elif defined(USE_SCC)
 3:	btst	#2,%a1@(LSCC_CTRL)
 	jeq	3b
-	moveb	%d0,%a1@(LSCC_DATA)
+	moveb	%d7,%a1@(LSCC_DATA)
 #elif defined(USE_MFP)
 3:	btst	#7,%a1@(LMFP_TSR)
 	jeq	3b
-	moveb	%d0,%a1@(LMFP_UDR)
+	moveb	%d7,%a1@(LMFP_UDR)
 #endif
-	jra	L(serial_putc_done)
+	jra	Lserial_putc_done
 4:
 #endif	/* CONFIG_ATARI */
 
-#ifdef CONFIG_MVME16x
+#if defined(CONFIG_MVME16x)
 	is_not_mvme16x(2f)
 	/*
 	 * The VME 16x class has PROM support for serial output
 	 * of some kind;  the TRAP table is still valid.
 	 */
 	moveml	%d0-%d7/%a2-%a6,%sp@-
-	moveb	%d0,%sp@-
-	trap	#15
-	.word	0x0020	/* TRAP 0x020 */
+	moveb	%d7,%sp@-
+	.long	0x4e4f0020	/* TRAP 0x020 */
 	moveml	%sp@+,%d0-%d7/%a2-%a6
-	jbra	L(serial_putc_done)
+	jbra	Lserial_putc_done
 2:
 #endif CONFIG_MVME162 | CONFIG_MVME167
 
-#ifdef CONFIG_BVME6000
+#if defined(CONFIG_BVME6000)
 	is_not_bvme6000(2f)
 	/*
 	 * The BVME6000 machine has a serial port ...
 	 */
 1:	btst	#2,BVME_SCC_CTRL_A
 	jeq	1b
-	moveb	%d0,BVME_SCC_DATA_A
-	jbra	L(serial_putc_done)
-2:
+	moveb	%d7,BVME_SCC_DATA_A
+	jbra	Lserial_putc_done
+2:	
 #endif
 
-L(serial_putc_done):
-func_return	serial_putc
+Lserial_putc_done:
+	moveml	%sp@+,%a0/%a1
+	rts
 
 /*
- * Output a string.
+ * Output string pointed to by a0 to serial port.
+ * a0 trashed.
  */
-func_start	puts,%d0/%a0
-
-	movel	ARG1,%a0
-	jra	2f
-1:
-#ifdef CONSOLE
-	console_putc	%d0
-#endif 
-#ifdef SERIAL_DEBUG
-	serial_putc	%d0
-#endif
-2:	moveb	%a0@+,%d0
-	jne	1b
-
-func_return	puts
+Lserial_puts:
+	movel	%d7,%sp@-
+1:	moveb	%a0@+,%d7
+	jeq	2f
+	jbsr	Lserial_putc
+	jra	1b
+2:	movel	%sp@+,%d7
+	rts
 
 /*
- * Output number in hex notation.
+ * Output number in d7 in hex notation on serial port.
  */
 
-func_start	putn,%d0-%d2
-
-	putc	' '
-
-	movel	ARG1,%d0
-	moveq	#7,%d1
-1:	roll	#4,%d0
-	move	%d0,%d2
-	andb	#0x0f,%d2
-	addb	#'0',%d2
-	cmpb	#'9',%d2
-	jls	2f
-	addb	#'A'-('9'+1),%d2
-2:
-#ifdef CONSOLE
-	console_putc	%d2
-#endif 
-#ifdef SERIAL_DEBUG
-	serial_putc	%d2
-#endif
-	dbra	%d1,1b
-
-func_return	putn
+Lserial_putnum:
+	moveml	%d0-%d2/%d7,%sp@-
+	movel	%d7,%d1
+	moveq	#4,%d0
+	moveq	#7,%d2
+L1:	roll	%d0,%d1
+	moveb	%d1,%d7
+	andb	#0x0f,%d7
+	cmpb	#0x0a,%d7
+	jcc	1f
+	addb	#'0',%d7
+	jra	2f
+1:	addb	#'A'-10,%d7
+2:	jbsr	Lserial_putc
+	dbra	%d2,L1
+	moveq	#32,%d7
+	jbsr	Lserial_putc
+	moveml	%sp@+,%d0-%d2/%d7
+	rts
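
Lserial_putnum above rotates the value a nibble at a time, most significant
nibble first, maps each nibble to '0'-'9'/'A'-'F' and finishes with a space
rather than a newline.  The same loop in C for reference (the function name
and the putc_fn callback are mine):

    /* C equivalent of the hex output loop above (names are mine). */
    static void putnum_hex(unsigned long v, void (*putc_fn)(char))
    {
        int i;

        for (i = 0; i < 8; i++) {
            unsigned char nib = (v >> 28) & 0x0f;   /* leftmost nibble first */

            putc_fn(nib < 10 ? '0' + nib : 'A' + nib - 10);
            v <<= 4;                                /* like the roll by 4 */
        }
        putc_fn(' ');       /* trailing space, as the moveq #32 above does */
    }
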
 
-#ifdef CONFIG_MAC
+#if defined(CONFIG_MAC)
 /*
  *	mac_serial_print
  *
@@ -2885,32 +3130,21 @@
  *	simple strings!
  */
 ENTRY(mac_serial_print)
-	moveml	%d0/%a0,%sp@-
+	movel	%a0,%sp@-
 #if 1
 	move	%sr,%sp@-
 	ori	#0x0700,%sr
 #endif
 	movel	%sp@(10),%a0		/* fetch parameter */
-	jra	2f
-1:	serial_putc	%d0
-2:	moveb	%a0@+,%d0
-	jne	1b
+	jbsr	Lserial_puts
 #if 1
 	move	%sp@+,%sr
 #endif
-	moveml	%sp@+,%d0/%a0
+	movel	%sp@+,%a0
 	rts
 #endif /* CONFIG_MAC */
 
-#ifdef CONFIG_HP300
-func_start	set_leds,%d0/%a0
-	movel	ARG1,%d0
-	movel	%pc@(Lcustom),%a0
-	moveb	%d0,%a0@(0x1ffff)
-func_return	set_leds
-#endif
-
-#ifdef CONSOLE
+#if defined(CONSOLE)
 /*
  *	For continuity, see the data alignment
  *	to which this structure is tied.
@@ -2922,9 +3156,9 @@
 #define Lconsole_struct_left_edge	16
 #define Lconsole_struct_penguin_putc	20
 
-L(console_init):
+Lconsole_init:
 	/*
-	 *	Some of the register usage that follows
+ 	 *	Some of the register usage that follows
 	 *		a0 = pointer to boot_info
 	 *		a1 = pointer to screen
 	 *		a2 = pointer to Lconsole_globals
@@ -2937,13 +3171,13 @@
 	 *		d6 = number of bytes on the entire screen
 	 */
 	moveml	%a0-%a4/%d0-%d7,%sp@-
-
-	lea	%pc@(L(console_globals)),%a2
-	lea	%pc@(L(mac_videobase)),%a0
+	
+	lea	%pc@(SYMBOL_NAME(Lconsole_globals)),%a2
+	lea	%pc@(SYMBOL_NAME(mac_videobase)),%a0
 	movel	%a0@,%a1
-	lea	%pc@(L(mac_rowbytes)),%a0
+	lea	%pc@(SYMBOL_NAME(mac_rowbytes)),%a0
 	movel	%a0@,%d5
-	lea	%pc@(L(mac_dimensions)),%a0
+	lea	%pc@(SYMBOL_NAME(mac_dimensions)),%a0
 	movel	%a0@,%d3	/* -> low byte */
 	movel	%d3,%d4
 	swap	%d4		/* -> high byte */
@@ -2955,7 +3189,7 @@
 	mulul	%d4,%d6		/* scan line bytes x num scan lines */
 	divul	#8,%d6		/* we'll clear 8 bytes at a time */
 	subq	#1,%d6
-
+	
 console_clear_loop:
 	movel	#0xffffffff,%a1@+	/* Mac_black */
 	movel	#0xffffffff,%a1@+	/* Mac_black */
@@ -2977,21 +3211,21 @@
 	 *	At this point we make a shift in register usage
 	 *	a1 = address of Lconsole_font pointer
 	 */
-	lea	%pc@(L(console_font)),%a1
+	lea	%pc@(SYMBOL_NAME(Lconsole_font)),%a1
 	movel	%a0,%a1@	/* store pointer to struct fbcon_font_desc in Lconsole_font */
 
 	/*
 	 *	Calculate global maxs
-	 *	Note - we can use either an
+	 *	Note - we can use either an 
 	 *	8 x 16 or 8 x 8 character font
 	 *	6 x 11 also supported
 	 */
 		/* ASSERT: a0 = contents of Lconsole_font */
 	movel	%d3,%d0			/* screen width in pixels */
-	divul	%a0@(FBCON_FONT_DESC_WIDTH),%d0		/* d0 = max num chars per row */
+	divul	%a0@(FBCON_FONT_DESC_width),%d0		/* d0 = max num chars per row */
 
 	movel	%d4,%d1			 /* screen height in pixels */
-	divul	%a0@(FBCON_FONT_DESC_HEIGHT),%d1	 /* d1 = max num rows */
+	divul	%a0@(FBCON_FONT_DESC_height),%d1	 /* d1 = max num rows */
 
 	movel	%d0,%a2@(Lconsole_struct_num_columns)
 	movel	%d1,%a2@(Lconsole_struct_num_rows)
@@ -3009,49 +3243,73 @@
 	moveml	%sp@+,%a0-%a4/%d0-%d7
 	rts
 
-L(console_put_stats):
+Lconsole_put_stats:
 	/*
-	 *	Some of the register usage that follows
+ 	 *	Some of the register usage that follows
 	 *		a0 = pointer to boot_info
 	 *		d7 = value of boot_info fields
 	 */
 	moveml	%a0/%d7,%sp@-
 
-	puts	"\nMacLinux\n\n"
-
-#ifdef SERIAL_DEBUG
-	puts	" vidaddr:"
-	putn	%pc@(L(mac_videobase))		/* video addr. */
+	putr()
+	puts("MacLinux")
+	putr()
+	putr()
+
+#if defined(SERIAL_DEBUG)
+	puts(" vidaddr:")
+	lea	%pc@(SYMBOL_NAME(mac_videobase)),%a0
+	movel	%a0@,%d7			/* video addr. */
+	jbsr	Lserial_putnum			/* This redirects to console */
+	putr()
 
-	puts	"\n  _stext:"
+	puts("  _stext:")
 	lea	%pc@(SYMBOL_NAME(_stext)),%a0
-	putn	%a0
+	movel	%a0,%d7		/* get start addr. */
+	jbsr	Lserial_putnum
+	putr()
 
-	puts	"\nbootinfo:"
+	puts("bootinfo:")	
 	lea	%pc@(SYMBOL_NAME(_end)),%a0
-	putn	%a0
-
-	puts	"\ncpuid:"
-	putn	%pc@(L(cputype))
-	putc	'\n'
+	movel	%a0, %d7	/* write start addr. */
+	jbsr	Lserial_putnum
+	putr()
+
+	puts("     kpt:")
+	lea	%pc@(SYMBOL_NAME(kpt)),%a0
+	movel	%a0,%d7		/* get start addr. */
+	jbsr	Lserial_putnum
+	putr()
+
+	puts("    *kpt:")
+	lea	%pc@(SYMBOL_NAME(kpt)),%a0
+	movel	%a0@,%d7	/* get start addr. */
+	jbsr	Lserial_putnum
+	putr()
+
+	puts("cpuid:")
+	lea	%pc@(SYMBOL_NAME(Lcputype)),%a0
+	movel	%a0@,%d7
+	jbsr	Lserial_putnum
+	putr()
 
 #  if defined(MMU_PRINT)
 	jbsr	mmu_print_machine_cpu_types
 #  endif /* MMU_PRINT */
 #endif /* SERIAL_DEBUG */
-
+	
 	moveml	%sp@+,%a0/%d7
 	rts
 
 #ifdef CONSOLE_PENGUIN
-L(console_put_penguin):
+Lconsole_put_penguin:
 	/*
 	 *	Get 'that_penguin' onto the screen in the upper right corner
 	 *	penguin is 64 x 74 pixels, align against right edge of screen
 	 */
 	moveml	%a0-%a1/%d0-%d7,%sp@-
 
-	lea	%pc@(L(mac_dimensions)),%a0
+	lea	%pc@(SYMBOL_NAME(mac_dimensions)),%a0
 	movel	%a0@,%d0
 	andil	#0xffff,%d0
 	subil	#64,%d0		/* snug up against the right edge */
@@ -3085,20 +3343,20 @@
 	 * Calculate source and destination addresses
 	 *	output	a1 = dest
 	 *		a2 = source
-	 */
-	lea	%pc@(L(mac_videobase)),%a0
+	 */	
+	lea	%pc@(SYMBOL_NAME(mac_videobase)),%a0
 	movel	%a0@,%a1
 	movel	%a1,%a2
-	lea	%pc@(L(mac_rowbytes)),%a0
+	lea	%pc@(SYMBOL_NAME(mac_rowbytes)),%a0
 	movel	%a0@,%d5
-	movel	%pc@(L(console_font)),%a0
-	mulul	%a0@(FBCON_FONT_DESC_HEIGHT),%d5	/* account for # scan lines per character */
+	movel	%pc@(SYMBOL_NAME(Lconsole_font)),%a0
+	mulul	%a0@(FBCON_FONT_DESC_height),%d5	/* account for # scan lines per character */
 	addal	%d5,%a2
 
 	/*
 	 * Get dimensions
 	 */
-	lea	%pc@(L(mac_dimensions)),%a0
+	lea	%pc@(SYMBOL_NAME(mac_dimensions)),%a0
 	movel	%a0@,%d3
 	movel	%d3,%d4
 	swap	%d4
@@ -3108,14 +3366,14 @@
 	/*
 	 * Calculate number of bytes to move
 	 */
-	lea	%pc@(L(mac_rowbytes)),%a0
+	lea	%pc@(SYMBOL_NAME(mac_rowbytes)),%a0
 	movel	%a0@,%d6
-	movel	%pc@(L(console_font)),%a0
-	subl	%a0@(FBCON_FONT_DESC_HEIGHT),%d4	/* we're not scrolling the top row! */
+	movel	%pc@(SYMBOL_NAME(Lconsole_font)),%a0
+	subl	%a0@(FBCON_FONT_DESC_height),%d4	/* we're not scrolling the top row! */
 	mulul	%d4,%d6		/* scan line bytes x num scan lines */
 	divul	#32,%d6		/* we'll move 8 longs at a time */
 	subq	#1,%d6
-
+	
 console_scroll_loop:
 	movel	%a2@+,%a1@+
 	movel	%a2@+,%a1@+
@@ -3127,10 +3385,10 @@
 	movel	%a2@+,%a1@+
 	dbra	%d6,console_scroll_loop
 
-	lea	%pc@(L(mac_rowbytes)),%a0
+	lea	%pc@(SYMBOL_NAME(mac_rowbytes)),%a0
 	movel	%a0@,%d6
-	movel	%pc@(L(console_font)),%a0
-	mulul	%a0@(FBCON_FONT_DESC_HEIGHT),%d6	/* scan line bytes x font height */
+	movel	%pc@(SYMBOL_NAME(Lconsole_font)),%a0
+	mulul	%a0@(FBCON_FONT_DESC_height),%d6	/* scan line bytes x font height */
 	divul	#32,%d6			/* we'll move 8 words at a time */
 	subq	#1,%d6
 
@@ -3145,25 +3403,19 @@
 	movel	%d0,%a1@+
 	movel	%d0,%a1@+
 	dbra	%d6,console_scroll_clear_loop
-
+	
 	moveml	%sp@+,%a0-%a4/%d0-%d7
 	rts
+	
 
-
-func_start	console_putc,%a0/%a1/%d0-%d7
-
-	is_not_mac(console_exit)
-
-	/* Output character in d7 on console.
-	 */
-	movel	ARG1,%d7
-	cmpib	#'\n',%d7
-	jbne	1f
-
-	/* A little safe recursion is good for the soul */
-	console_putc	#'\r'
-1:
-	lea	%pc@(L(console_globals)),%a0
+	
+Lconsole_putc:	
+/*
+ * Output character in d7 on console.
+ */
+	moveml	%a0/%a1/%d0-%d7,%sp@-
+	
+	lea	%pc@(Lconsole_globals),%a0
 
 	cmpib	#10,%d7
 	jne	console_not_lf
@@ -3178,7 +3430,7 @@
 	jbsr	console_scroll
 1:
 	jra	console_exit
-
+	
 console_not_lf:
 	cmpib	#13,%d7
 	jne	console_not_cr
@@ -3191,7 +3443,7 @@
 	clrl	%a0@(Lconsole_struct_cur_row)
 	clrl	%a0@(Lconsole_struct_cur_column)
 	jra	console_exit
-
+	
 /*
  *	At this point we know that the %d7 character is going to be
  *	rendered on the screen.  Register usage is -
@@ -3204,24 +3456,26 @@
 console_not_home:
 	movel	%a0@(Lconsole_struct_cur_column),%d0
 	addil	#1,%a0@(Lconsole_struct_cur_column)
-	movel	%a0@(Lconsole_struct_num_columns),%d1
+	movel	%a0@(Lconsole_struct_num_columns),%d1	
 	cmpl	%d1,%d0
 	jcs	1f
-	putc	'\n'	/* recursion is OK! */
-1:
+	movel	%d7,%sp@-
+	putr()		/* recursion is OK! */
+	movel	%sp@+,%d7
+1:	
 	movel	%a0@(Lconsole_struct_cur_row),%d1
-
+	
 	/*
 	 *	At this point we make a shift in register usage
-	 *	a0 = address of pointer to font data (fbcon_font_desc)
+ 	 *	a0 = address of pointer to font data (fbcon_font_desc)
 	 */
-	movel	%pc@(L(console_font)),%a0
-	movel	%a0@(FBCON_FONT_DESC_DATA),%a1	/* Load fbcon_font_desc.data into a1 */
+	movel	%pc@(SYMBOL_NAME(Lconsole_font)),%a0
+	movel	%a0@(FBCON_FONT_DESC_data),%a1	/* Load fbcon_font_desc.data into a1 */
 	andl	#0x000000ff,%d7
 		/* ASSERT: a0 = contents of Lconsole_font */
-	mulul	%a0@(FBCON_FONT_DESC_HEIGHT),%d7	/* d7 = index into font data */
+	mulul	%a0@(FBCON_FONT_DESC_height),%d7	/* d7 = index into font data */
 	addl	%d7,%a1			/* a1 = points to char image */
-
+	
 	/*
 	 *	At this point we make a shift in register usage
 	 *	d0 = pixel coordinate, x
@@ -3232,15 +3486,15 @@
 	 *	d7 = count down for the font's pixel count in height
 	 */
 		/* ASSERT: a0 = contents of Lconsole_font */
-	mulul	%a0@(FBCON_FONT_DESC_WIDTH),%d0
-	mulul	%a0@(FBCON_FONT_DESC_HEIGHT),%d1
-	movel	%a0@(FBCON_FONT_DESC_HEIGHT),%d7	/* Load fbcon_font_desc.height into d7 */
+	mulul	%a0@(FBCON_FONT_DESC_width),%d0
+	mulul	%a0@(FBCON_FONT_DESC_height),%d1
+	movel	%a0@(FBCON_FONT_DESC_height),%d7	/* Load fbcon_font_desc.height into d7 */
 	subq	#1,%d7
 console_read_char_scanline:
 	moveb	%a1@+,%d3
 
 		/* ASSERT: a0 = contents of Lconsole_font */
-	movel	%a0@(FBCON_FONT_DESC_WIDTH),%d6	/* Load fbcon_font_desc.width into d6 */
+	movel	%a0@(FBCON_FONT_DESC_width),%d6	/* Load fbcon_font_desc.width into d6 */
 	subql	#1,%d6
 
 console_do_font_scanline:
@@ -3249,15 +3503,15 @@
 	jbsr	console_plot_pixel
 	addq	#1,%d0
 	dbra	%d6,console_do_font_scanline
-
+	
 		/* ASSERT: a0 = contents of Lconsole_font */
-	subl	%a0@(FBCON_FONT_DESC_WIDTH),%d0
+	subl	%a0@(FBCON_FONT_DESC_width),%d0
 	addq	#1,%d1
 	dbra	%d7,console_read_char_scanline
-
-console_exit:
-
-func_return	console_putc
+	
+console_exit:		
+	moveml	%sp@+,%a0/%a1/%d0-%d7
+	rts
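
The character path above boils down to: index the font bitmap at
char * font_height, then walk font_height scanlines of font_width bits,
plotting each bit at (column * width + dx, row * height + dy).  A compact C
restatement (names are mine; it assumes the one-byte-per-scanline fonts this
code handles, i.e. a width of at most 8):

    /* Sketch of the glyph rendering loop above (names are mine). */
    static void render_char(unsigned char c, int col, int row,
                            const unsigned char *font_data,
                            int font_w, int font_h,
                            void (*plot)(int x, int y, int on))
    {
        const unsigned char *glyph = font_data + c * font_h;
        int x0 = col * font_w, y0 = row * font_h;
        int dx, dy;

        for (dy = 0; dy < font_h; dy++) {
            unsigned char bits = glyph[dy];

            for (dx = 0; dx < font_w; dx++)
                plot(x0 + dx, y0 + dy, (bits >> (font_w - 1 - dx)) & 1);
        }
    }
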
 
 console_plot_pixel:
 	/*
@@ -3268,12 +3522,12 @@
 	 *	All registers are preserved
 	 */
 	moveml	%a0-%a1/%d0-%d4,%sp@-
-
-	lea	%pc@(L(mac_videobase)),%a0
+	
+	lea	%pc@(SYMBOL_NAME(mac_videobase)),%a0
 	movel	%a0@,%a1
-	lea	%pc@(L(mac_videodepth)),%a0
+	lea	%pc@(SYMBOL_NAME(mac_videodepth)),%a0
 	movel	%a0@,%d3
-	lea	%pc@(L(mac_rowbytes)),%a0
+	lea	%pc@(SYMBOL_NAME(mac_rowbytes)),%a0
 	mulul	%a0@,%d1
 
 	/*
@@ -3396,59 +3650,64 @@
  * It was still in the 2.1.77 head.S, so it's still here.
  * (And still not used!)
  */
-L(showtest):
+Lshowtest:
 	moveml	%a0/%d7,%sp@-
-	puts	"A="
-	putn	%a1
+	puts("A=")
+	putn(%a1)
 
 	.long	0xf0119f15		| ptestr	#5,%a1@,#7,%a0
 
-	puts	"DA="
-	putn	%a0
+	puts("DA=")
+	putn(%a0)
 
-	puts	"D="
-	putn	%a0@
+	puts("D=")
+	putn(%a0@)
 
-	puts	"S="
-	lea	%pc@(L(mmu)),%a0
+	puts("S=")
+	lea	%pc@(Lmmu),%a0
 	.long	0xf0106200		| pmove		%psr,%a0@
 	clrl	%d7
 	movew	%a0@,%d7
-	putn	%d7
+	jbsr	Lserial_putnum
 
-	putc	'\n'
+	putr()
 	moveml	%sp@+,%a0/%d7
 	rts
 #endif	/* 0 */
-
-__INITDATA
+	
+	.data
 	.align	4
-
-#if defined(CONFIG_ATARI) || defined(CONFIG_AMIGA) || defined(CONFIG_HP300)
-L(custom):
-L(iobase):
+	
+#if defined(CONFIG_ATARI) || defined(CONFIG_AMIGA)
+Lcustom:
+Liobase:
 	.long 0
 #endif
 
+#if defined(CONFIG_M68020) || defined(CONFIG_M68030)
+Lmmu:
+	.quad 0
+#endif
+
 #ifdef CONFIG_MAC
-L(console_video_virtual):
+Lconsole_video_virtual:
 	.long	0
 #endif	/* CONFIG_MAC */
 
 #if defined(CONSOLE)
-L(console_globals):
+Lconsole_globals:
 	.long	0		/* cursor column */
 	.long	0		/* cursor row */
 	.long	0		/* max num columns */
 	.long	0		/* max num rows */
 	.long	0		/* left edge */
 	.long	0		/* mac putc */
-L(console_font):
+Lconsole_font:
 	.long	0		/* pointer to console font (struct fbcon_font_desc) */
 #endif /* CONSOLE */
 
 #if defined(MMU_PRINT)
-L(mmu_print_data):
+Lmmu_print_data:
 	.long	0		/* valid flag */
 	.long	0		/* start logical */
 	.long	0		/* next logical */
@@ -3456,57 +3715,61 @@
 	.long	0		/* next physical */
 #endif /* MMU_PRINT */
 
-L(cputype):
+Lcputype:
 	.long	0
-L(mmu_cached_pointer_tables):
-	.long	0
-L(mmu_num_pointer_tables):
-	.long	0
-L(phys_kernel_start):
-	.long	0
-L(kernel_end):
-	.long	0
-L(memory_start):
+
+Lmmu_cached_page_tables:
 	.long	0
-L(kernel_pgdir_ptr):
+
+Lmmu_cached_pointer_tables:
 	.long	0
-L(temp_mmap_mem):
+
+Lmmu_num_page_tables:
 	.long	0
 
+Lmmu_num_pointer_tables:
+	.long	0
 
 #if defined (CONFIG_BVME6000)
 BVME_SCC_CTRL_A	= 0xffb0000b
 BVME_SCC_DATA_A	= 0xffb0000f
 #endif
 
+#if 0
+#if defined(CONFIG_ATARI)
+SYMBOL_NAME_LABEL(atari_mch_type)
+	 .long 0
+#endif
+#endif
+
 #if defined(CONFIG_MAC)
-L(mac_booter_data):
-	.long	0
-L(mac_videobase):
-	.long	0
-L(mac_videodepth):
-	.long	0
-L(mac_dimensions):
-	.long	0
-L(mac_rowbytes):
-	.long	0
+SYMBOL_NAME_LABEL(mac_booter_data)
+       .long 0
+SYMBOL_NAME_LABEL(mac_videobase)
+       .long 0
+SYMBOL_NAME_LABEL(mac_videodepth)
+       .long 0
+SYMBOL_NAME_LABEL(mac_dimensions)
+       .long 0
+SYMBOL_NAME_LABEL(mac_rowbytes)
+       .long 0
 #ifdef MAC_SERIAL_DEBUG
-L(mac_sccbase):
-	.long	0
+SYMBOL_NAME_LABEL(mac_sccbase)
+       .long 0
 #endif /* MAC_SERIAL_DEBUG */
 #endif
 
-__FINIT
-	.data
-	.align	4
-
+SYMBOL_NAME_LABEL(kpt)
+	.long 0
 SYMBOL_NAME_LABEL(availmem)
-	.long	0
+	.long 0
 SYMBOL_NAME_LABEL(m68k_pgtable_cachemode)
-	.long	0
+	.long 0
+#ifdef CONFIG_060_WRITETHROUGH
 SYMBOL_NAME_LABEL(m68k_supervisor_cachemode)
-	.long	0
+	.long 0
+#endif
 #if defined(CONFIG_MVME16x)
 SYMBOL_NAME_LABEL(mvme_bdid_ptr)
-	.long	0
+	.long 0
 #endif
--- linux-2.2.0pre7/arch/m68k/kernel/m68k_ksyms.c.rz	Sun Jan 31 15:19:54 1999
+++ linux-2.2.0pre7/arch/m68k/kernel/m68k_ksyms.c	Sun Jan 31 15:20:22 1999
@@ -40,8 +40,10 @@
 EXPORT_SYMBOL(mm_vtop_fallback);
 EXPORT_SYMBOL(m68k_realnum_memory);
 EXPORT_SYMBOL(m68k_memory);
+#if 0
 EXPORT_SYMBOL(__ioremap);
 EXPORT_SYMBOL(iounmap);
+#endif
 EXPORT_SYMBOL(m68k_debug_device);
 EXPORT_SYMBOL(dump_fpu);
 EXPORT_SYMBOL(dump_thread);
--- linux-2.2.0pre7/arch/m68k/mac/config.c.rz	Sun Jan 31 15:33:13 1999
+++ linux-2.2.0pre7/arch/m68k/mac/config.c	Sun Jan 31 15:35:27 1999
@@ -55,6 +55,9 @@
 
 void *mac_env;		/* Loaded by the boot asm */
 
+/* The logical video addr. determined by head.S - testing */
+extern unsigned long mac_videobase;
+
 /* The phys. video addr. - might be bogus on some machines */
 unsigned long mac_orig_videoaddr;
 
@@ -237,7 +240,13 @@
 	    mac_bi_data.id = *data;
 	    break;
 	case BI_MAC_VADDR:
+#if 1
+	    /* save booter supplied videobase; use the one mapped in head.S! */
+	    mac_orig_videoaddr = *data;
+	    mac_bi_data.videoaddr = mac_videobase;
+#else
 	    mac_bi_data.videoaddr = VIDEOMEMBASE + (*data & ~VIDEOMEMMASK);
+#endif
 	    break;
 	case BI_MAC_VDEPTH:
 	    mac_bi_data.videodepth = *data;
--- linux-2.2.0pre7/arch/m68k/mm/init.c.rz	Sun Jan 31 14:13:40 1999
+++ linux-2.2.0pre7/arch/m68k/mm/init.c	Sun Jan 31 14:14:41 1999
@@ -28,9 +28,8 @@
 #include <asm/atari_stram.h>
 #endif
 
-#undef DEBUG
-
 extern void die_if_kernel(char *,struct pt_regs *,long);
+extern void init_kpointer_table(void);
 extern void show_net_buffers(void);
 
 int do_check_pgt_cache(int low, int high)
@@ -123,14 +122,17 @@
 unsigned long mm_cachebits = 0;
 #endif
 
-static pte_t *__init kernel_page_table(unsigned long *memavailp)
+pte_t *kernel_page_table (unsigned long *memavailp)
 {
 	pte_t *ptablep;
 
-	ptablep = (pte_t *)*memavailp;
-	*memavailp += PAGE_SIZE;
+	if (memavailp) {
+		ptablep = (pte_t *)*memavailp;
+		*memavailp += PAGE_SIZE;
+	}
+	else
+		ptablep = (pte_t *)__get_free_page(GFP_KERNEL);
 
-	clear_page((unsigned long)ptablep);
 	flush_page_to_ram((unsigned long) ptablep);
 	flush_tlb_kernel_page((unsigned long) ptablep);
 	nocache_page ((unsigned long)ptablep);
@@ -138,164 +140,199 @@
 	return ptablep;
 }
 
-static pmd_t *last_pgtable __initdata = NULL;
-
-static pmd_t *__init kernel_ptr_table(unsigned long *memavailp)
+__initfunc(static unsigned long
+map_chunk (unsigned long addr, unsigned long size, unsigned long *memavailp))
 {
-	if (!last_pgtable) {
-		unsigned long pmd, last;
-		int i;
-
-		last = (unsigned long)kernel_pg_dir;
-		for (i = 0; i < PTRS_PER_PGD; i++) {
-			if (!pgd_val(kernel_pg_dir[i]))
-				continue;
-			pmd = pgd_page(kernel_pg_dir[i]);
-			if (pmd > last)
-				last = pmd;
-		}
+#define ONEMEG	(1024*1024)
+#define L3TREESIZE (256*1024)
 
-		last_pgtable = (pmd_t *)last;
-#ifdef DEBUG
-		printk("kernel_ptr_init: %p\n", last_pgtable);
-#endif
+	static unsigned long mem_mapped = 0;
+	static unsigned long virtaddr = 0;
+	static pte_t *ktablep = NULL;
+	unsigned long *kpointerp;
+	unsigned long physaddr;
+	extern pte_t *kpt;
+	int pindex;   /* index into pointer table */
+	pgd_t *page_dir = pgd_offset_k (virtaddr);
+
+	if (!pgd_present (*page_dir)) {
+		/* we need a new pointer table */
+		kpointerp = (unsigned long *) get_kpointer_table ();
+		pgd_set (page_dir, (pmd_t *) kpointerp);
+		memset (kpointerp, 0, PTRS_PER_PMD * sizeof (pmd_t));
 	}
+	else
+		kpointerp = (unsigned long *) pgd_page (*page_dir);
 
-	if (((unsigned long)(last_pgtable + PTRS_PER_PMD) & ~PAGE_MASK) == 0) {
-		last_pgtable = (pmd_t *)*memavailp;
-		*memavailp += PAGE_SIZE;
+	/*
+	 * pindex is the offset into the pointer table for the
+	 * descriptors for the current virtual address being mapped.
+	 */
+	pindex = (virtaddr >> 18) & 0x7f;
 
-		clear_page((unsigned long)last_pgtable);
-		flush_page_to_ram((unsigned long)last_pgtable);
-		flush_tlb_kernel_page((unsigned long)last_pgtable);
-		nocache_page((unsigned long)last_pgtable);
-	} else
-		last_pgtable += PTRS_PER_PMD;
+#ifdef DEBUG
+	printk ("mm=%ld, kernel_pg_dir=%p, kpointerp=%p, pindex=%d\n",
+		mem_mapped, kernel_pg_dir, kpointerp, pindex);
+#endif
 
-	return last_pgtable;
-}
+	/*
+	 * if this is running on an '040, we already allocated a page
+	 * table for the first 4M.  The address is stored in kpt by
+	 * arch/head.S
+	 *
+	 */
+	if (CPU_IS_040_OR_060 && mem_mapped == 0)
+		ktablep = kpt;
 
-static unsigned long __init
-map_chunk (unsigned long addr, long size, unsigned long *memavailp)
-{
-#define PTRTREESIZE (256*1024)
-#define ROOTTREESIZE (32*1024*1024)
-	static unsigned long virtaddr = 0;
-	unsigned long physaddr;
-	pgd_t *pgd_dir;
-	pmd_t *pmd_dir;
-	pte_t *pte_dir;
+	for (physaddr = addr;
+	     physaddr < addr + size;
+	     mem_mapped += L3TREESIZE, virtaddr += L3TREESIZE) {
 
-	physaddr = (addr | m68k_supervisor_cachemode |
-		    _PAGE_PRESENT | _PAGE_ACCESSED | _PAGE_DIRTY);
-	if (CPU_IS_040_OR_060)
-		physaddr |= _PAGE_GLOBAL040;
+#ifdef DEBUG
+		printk ("pa=%#lx va=%#lx ", physaddr, virtaddr);
+#endif
 
-	while (size > 0) {
+		if (pindex > 127 && mem_mapped >= 32*ONEMEG) {
+			/* we need a new pointer table every 32M */
 #ifdef DEBUG
-		if (!(virtaddr & (PTRTREESIZE-1)))
-			printk ("\npa=%#lx va=%#lx ", physaddr & PAGE_MASK,
-				virtaddr);
+			printk ("[new pointer]");
 #endif
-		pgd_dir = pgd_offset_k(virtaddr);
-		if (virtaddr && CPU_IS_020_OR_030) {
-			if (!(virtaddr & (ROOTTREESIZE-1)) &&
-			    size >= ROOTTREESIZE) {
+
+			kpointerp = (unsigned long *)get_kpointer_table ();
+			pgd_set(pgd_offset_k(virtaddr), (pmd_t *)kpointerp);
+			pindex = 0;
+		}
+
+		if (CPU_IS_040_OR_060) {
+			int i;
+			unsigned long ktable;
+
+			/* Don't map the first 4 MB again. The pagetables
+			 * for this range have already been initialized
+			 * in boot/head.S. Otherwise the pages used for
+			 * tables would be reinitialized to copyback mode.
+			 */
+
+			if (mem_mapped < 4 * ONEMEG)
+			{
 #ifdef DEBUG
-				printk ("[very early term]");
+				printk ("Already initialized\n");
 #endif
-				pgd_val(*pgd_dir) = physaddr;
-				size -= ROOTTREESIZE;
-				virtaddr += ROOTTREESIZE;
-				physaddr += ROOTTREESIZE;
+				physaddr += L3TREESIZE;
+				pindex++;
 				continue;
 			}
-		}
-		if (!pgd_present(*pgd_dir)) {
-			pmd_dir = kernel_ptr_table(memavailp);
 #ifdef DEBUG
-			printk ("[new pointer %p]", pmd_dir);
+			printk ("[setup table]");
 #endif
-			pgd_set(pgd_dir, pmd_dir);
-		} else
-			pmd_dir = pmd_offset(pgd_dir, virtaddr);
 
-		if (CPU_IS_020_OR_030) {
-			if (virtaddr) {
-#ifdef DEBUG
-				printk ("[early term]");
-#endif
-				pmd_dir->pmd[(virtaddr/PTRTREESIZE) & 15] = physaddr;
-				physaddr += PTRTREESIZE;
-			} else {
-				int i;
+			/*
+			 * 68040, use page tables pointed to by the
+			 * kernel pointer table.
+			 */
+
+			if ((pindex & 15) == 0) {
+				/* Need new page table every 4M on the '040 */
 #ifdef DEBUG
-				printk ("[zero map]");
+				printk ("[new table]");
 #endif
-				pte_dir = (pte_t *)kernel_ptr_table(memavailp);
-				pmd_dir->pmd[0] = virt_to_phys(pte_dir) |
-					_PAGE_TABLE | _PAGE_ACCESSED;
-				pte_val(*pte_dir++) = 0;
+				ktablep = kernel_page_table (memavailp);
+			}
+
+			ktable = virt_to_phys(ktablep);
+
+			/*
+			 * initialize section of the page table mapping
+			 * this 256K portion.
+			 */
+			for (i = 0; i < 64; i++) {
+				pte_val(ktablep[i]) = physaddr | _PAGE_PRESENT
+				  | m68k_supervisor_cachemode | _PAGE_GLOBAL040
+					| _PAGE_ACCESSED;
 				physaddr += PAGE_SIZE;
-				for (i = 1; i < 64; physaddr += PAGE_SIZE, i++)
-					pte_val(*pte_dir++) = physaddr;
 			}
-			size -= PTRTREESIZE;
-			virtaddr += PTRTREESIZE;
+			ktablep += 64;
+
+			/*
+			 * make the kernel pointer table point to the
+			 * kernel page table.  Each entry points to a
+			 * 64-entry section of the page table.
+			 */
+
+			kpointerp[pindex++] = ktable | _PAGE_TABLE | _PAGE_ACCESSED;
 		} else {
-			if (!pmd_present(*pmd_dir)) {
+			/*
+			 * 68030, use early termination page descriptors.
+			 * Each one points to 64 pages (256K).
+			 */
+#ifdef DEBUG
+			printk ("[early term] ");
+#endif
+			if (virtaddr == 0UL) {
+				/* map the first 256K using a 64 entry
+				 * 3rd level page table.
+				 * UNMAP the first entry to trap
+				 * zero page (NULL pointer) references
+				 */
+				int i;
+				unsigned long *tbl;
+				
+				tbl = (unsigned long *)get_kpointer_table();
+
+				kpointerp[pindex++] = virt_to_phys(tbl) | _PAGE_TABLE |_PAGE_ACCESSED;
+
+				for (i = 0; i < 64; i++, physaddr += PAGE_SIZE)
+					tbl[i] = physaddr | _PAGE_PRESENT | _PAGE_ACCESSED;
+				
+				/* unmap the zero page */
+				tbl[0] = 0;
+			} else {
+				/* not the first 256K */
+				kpointerp[pindex++] = physaddr | _PAGE_PRESENT | _PAGE_ACCESSED;
 #ifdef DEBUG
-				printk ("[new table]");
+				printk ("%lx=%lx ", virt_to_phys(&kpointerp[pindex-1]),
+					kpointerp[pindex-1]);
 #endif
-				pte_dir = kernel_page_table(memavailp);
-				pmd_set(pmd_dir, pte_dir);
+				physaddr += 64 * PAGE_SIZE;
 			}
-			pte_dir = pte_offset(pmd_dir, virtaddr);
-
-			if (virtaddr) {
-				if (!pte_present(*pte_dir))
-					pte_val(*pte_dir) = physaddr;
-			} else
-				pte_val(*pte_dir) = 0;
-			size -= PAGE_SIZE;
-			virtaddr += PAGE_SIZE;
-			physaddr += PAGE_SIZE;
 		}
-
-	}
 #ifdef DEBUG
-	printk("\n");
+		printk ("\n");
 #endif
+	}
 
-	return virtaddr;
+	return mem_mapped;
 }
 
 extern unsigned long free_area_init(unsigned long, unsigned long);
-extern void init_pointer_table(unsigned long ptable);
 
 /* References to section boundaries */
 
 extern char _text, _etext, _edata, __bss_start, _end;
 extern char __init_begin, __init_end;
 
+extern pgd_t swapper_pg_dir[PTRS_PER_PGD];
+
 /*
  * paging_init() continues the virtual memory environment setup which
  * was begun by the code in arch/head.S.
  */
-unsigned long __init paging_init(unsigned long start_mem,
-				 unsigned long end_mem)
+__initfunc(unsigned long paging_init(unsigned long start_mem,
+				     unsigned long end_mem))
 {
 	int chunk;
 	unsigned long mem_avail = 0;
 
 #ifdef DEBUG
 	{
-		extern unsigned long availmem;
-		printk ("start of paging_init (%p, %lx, %lx, %lx)\n",
-			kernel_pg_dir, availmem, start_mem, end_mem);
+		extern pte_t *kpt;
+		printk ("start of paging_init (%p, %p, %lx, %lx, %lx)\n",
+			kernel_pg_dir, kpt, availmem, start_mem, end_mem);
 	}
 #endif
 
+	init_kpointer_table();
+
 	/* Fix the cache mode in the page descriptors for the 680[46]0.  */
 	if (CPU_IS_040_OR_060) {
 		int i;
@@ -329,7 +366,6 @@
 				       m68k_memory[chunk].size, &start_mem);
 
 	}
-
 	flush_tlb_all();
 #ifdef DEBUG
 	printk ("memory available is %ldKB\n", mem_avail >> 10);
@@ -349,16 +385,21 @@
 	start_mem += PAGE_SIZE;
 	memset((void *)empty_zero_page, 0, PAGE_SIZE);
 
+#if 0
 	/* 
 	 * allocate the "swapper" page directory and
 	 * record in task 0 (swapper) tss 
 	 */
-	init_mm.pgd = (pgd_t *)kernel_ptr_table(&start_mem);
-	memset (init_mm.pgd, 0, sizeof(pgd_t)*PTRS_PER_PGD);
+	swapper_pg_dir = (pgd_t *)get_kpointer_table();
+
+	init_mm.pgd = swapper_pg_dir;
+#endif
+
+	memset (swapper_pg_dir, 0, sizeof(pgd_t)*PTRS_PER_PGD);
 
 	/* setup CPU root pointer for swapper task */
 	task[0]->tss.crp[0] = 0x80000000 | _PAGE_TABLE;
-	task[0]->tss.crp[1] = virt_to_phys(init_mm.pgd);
+	task[0]->tss.crp[1] = virt_to_phys (swapper_pg_dir);
 
 #ifdef DEBUG
 	printk ("task 0 pagedir at %p virt, %#lx phys\n",
@@ -389,16 +430,16 @@
 #ifdef DEBUG
 	printk ("before free_area_init\n");
 #endif
-	return PAGE_ALIGN(free_area_init(start_mem, end_mem));
+
+	return PAGE_ALIGN(free_area_init (start_mem, end_mem));
 }
 
-void __init mem_init(unsigned long start_mem, unsigned long end_mem)
+__initfunc(void mem_init(unsigned long start_mem, unsigned long end_mem))
 {
 	int codepages = 0;
 	int datapages = 0;
 	int initpages = 0;
 	unsigned long tmp;
-	int i;
 
 	end_mem &= PAGE_MASK;
 	high_memory = (void *) end_mem;
@@ -439,14 +480,6 @@
 #endif
 			free_page(tmp);
 	}
-
-	/* insert pointer tables allocated so far into the tablelist */
-	init_pointer_table((unsigned long)kernel_pg_dir);
-	for (i = 0; i < PTRS_PER_PGD; i++) {
-		if (pgd_val(kernel_pg_dir[i]))
-			init_pointer_table(pgd_page(kernel_pg_dir[i]));
-	}
-
 	printk("Memory: %luk/%luk available (%dk kernel code, %dk data, %dk init)\n",
 	       (unsigned long) nr_free_pages << (PAGE_SHIFT-10),
 	       max_mapnr << (PAGE_SHIFT-10),
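
Not part of the diff, just for reference: the index arithmetic the reverted
map_chunk() above relies on is 256K (L3TREESIZE) per pointer-table slot, 128
slots per pointer table (hence a new pointer table every 32M), and on the
'040 a fresh 64-entry page-table section whenever (pindex & 15) == 0, i.e.
every 4M. A small host-side sketch with the same constants:

/*
 * Host-side sketch (not kernel code): where a given virtual address lands
 * in the tables that map_chunk() sets up.  4K pages assumed, as above.
 */
#include <stdio.h>

#define L3TREESIZE	(256 * 1024UL)	/* one pointer-table slot	  */
#define PTRS_PER_PMD	128		/* slots per pointer table -> 32M */

int main(void)
{
	unsigned long virtaddr = 0x00640000UL;		/* example: 6.25M */

	unsigned long pindex  = (virtaddr >> 18) & 0x7f;	/* slot, as in map_chunk() */
	unsigned long ptable  = virtaddr / (PTRS_PER_PMD * L3TREESIZE);	/* which 32M table */
	unsigned long section = pindex & 15;	/* 64-entry section of the current
						 * 4M page table ('040/'060 only) */

	printf("va %#lx: pointer table %lu, slot %lu, 4M-table section %lu\n",
	       virtaddr, ptable, pindex, section);
	return 0;
}

On the '030 each slot is instead filled with an early termination descriptor
that maps the 256K directly, so only the first two numbers matter there.
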
--- linux-2.2.0pre7/arch/m68k/mm/kmap.c.rz	Sun Jan 31 14:13:55 1999
+++ linux-2.2.0pre7/arch/m68k/mm/kmap.c	Sun Jan 31 14:14:52 1999
@@ -2,9 +2,6 @@
  *  linux/arch/m68k/mm/kmap.c
  *
  *  Copyright (C) 1997 Roman Hodek
- *
- *  10/01/99 cleaned up the code and changing to the same interface
- *	     used by other architectures		/Roman Zippel
  */
 
 #include <linux/mm.h>
@@ -12,88 +9,250 @@
 #include <linux/string.h>
 #include <linux/types.h>
 #include <linux/malloc.h>
-#include <linux/vmalloc.h>
 
 #include <asm/setup.h>
 #include <asm/segment.h>
 #include <asm/page.h>
 #include <asm/pgtable.h>
-#include <asm/io.h>
 #include <asm/system.h>
 
-#undef DEBUG
 
-#define PTRTREESIZE	(256*1024)
+extern pte_t *kernel_page_table (unsigned long *memavailp);
+
+/* Granularity of kernel_map() allocations */
+#define KMAP_STEP	(256*1024)
+
+/* Size of the pool of KMAP structures; this is needed because kernel_map()
+ * can be called at times when kmalloc() isn't initialized yet. */
+#define	KMAP_POOL_SIZE	16
+
+/* structure for maintenance of kmap regions */
+typedef struct kmap {
+	struct kmap *next, *prev;	/* linking of list */
+	unsigned long addr;			/* start address of region */
+	unsigned long mapaddr;		/* address returned to user */
+	unsigned long size;			/* size of region */
+	unsigned free : 1;			/* flag whether free or allocated */
+	unsigned kmalloced : 1;		/* set if this came from kmalloc() */
+	unsigned pool_alloc : 1;	/* set if this slot is allocated in the pool */
+} KMAP;
+
+KMAP kmap_pool[KMAP_POOL_SIZE] = {
+	{ NULL, NULL, KMAP_START, KMAP_START, KMAP_END-KMAP_START, 1, 0, 1 },
+	{ NULL, NULL, 0, 0, 0, 0, 0, 0 },
+};
 
 /*
- * For 040/060 we can use the virtual memory area like other architectures,
- * but for 020/030 we want to use early termination page descriptor and we
- * can't mix this with normal page descriptors, so we have to copy that code
- * (mm/vmalloc.c) and return appriorate aligned addresses.
+ * anchor of kmap region list
+ *
+ * The list is always ordered by addresses, and regions are always adjacent,
+ * i.e. there must be no holes between them!
  */
+KMAP *kmap_regions = &kmap_pool[0];
+
+/* for protecting the kmap_regions list against races */
+static struct semaphore kmap_sem = MUTEX;
 
-#ifdef CPU_M68040_OR_M68060_ONLY
 
-#define IO_SIZE		PAGE_SIZE
 
-static inline struct vm_struct *get_io_area(unsigned long size)
+/*
+ * Low-level allocation and freeing of KMAP structures
+ */
+static KMAP *alloc_kmap( int use_kmalloc )
 {
-	return get_vm_area(size);
-}
+	KMAP *p;
+	int i;
 
+	/* first try to get from the pool if possible */
+	for( i = 0; i < KMAP_POOL_SIZE; ++i ) {
+		if (!kmap_pool[i].pool_alloc) {
+			kmap_pool[i].kmalloced = 0;
+			kmap_pool[i].pool_alloc = 1;
+			return( &kmap_pool[i] );
+		}
+	}
+	
+	if (use_kmalloc && (p = (KMAP *)kmalloc( sizeof(KMAP), GFP_KERNEL ))) {
+		p->kmalloced = 1;
+		return( p );
+	}
+	
+	return( NULL );
+}
 
-static inline void free_io_area(void *addr)
+static void free_kmap( KMAP *p )
 {
-	return vfree((void *)(PAGE_MASK & (unsigned long)addr));
+	if (p->kmalloced)
+		kfree( p );
+	else
+		p->pool_alloc = 0;
 }
 
-#else
 
-#define IO_SIZE		(256*1024)
+/*
+ * Get a free region from the kmap address range
+ */
+static KMAP *kmap_get_region( unsigned long size, int use_kmalloc )
+{
+	KMAP *p, *q;
+
+	/* look for a suitable free region */
+	for( p = kmap_regions; p; p = p->next )
+		if (p->free && p->size >= size)
+			break;
+	if (!p) {
+		printk( KERN_ERR "kernel_map: address space for "
+				"allocations exhausted\n" );
+		return( NULL );
+	}
+	
+	if (p->size > size) {
+		/* if free region is bigger than we need, split off the rear free part
+		 * into a new region */
+		if (!(q = alloc_kmap( use_kmalloc ))) {
+			printk( KERN_ERR "kernel_map: out of memory\n" );
+			return( NULL );
+		}
+		q->addr = p->addr + size;
+		q->size = p->size - size;
+		p->size = size;
+		q->free = 1;
+
+		q->prev = p;
+		q->next = p->next;
+		p->next = q;
+		if (q->next) q->next->prev = q;
+	}
+	
+	p->free = 0;
+	return( p );
+}
 
-static struct vm_struct *iolist = NULL;
 
-static struct vm_struct *get_io_area(unsigned long size)
+/*
+ * Free a kernel_map region again
+ */
+static void kmap_put_region( KMAP *p )
 {
-	unsigned long addr;
-	struct vm_struct **p, *tmp, *area;
+	KMAP *q;
 
-	area = (struct vm_struct *)kmalloc(sizeof(*area), GFP_KERNEL);
-	if (!area)
-		return NULL;
-	addr = KMAP_START;
-	for (p = &iolist; (tmp = *p) ; p = &tmp->next) {
-		if (size + addr < (unsigned long)tmp->addr)
-			break;
-		if (addr > KMAP_END-size)
+	p->free = 1;
+
+	/* merge with previous region if possible */
+	q = p->prev;
+	if (q && q->free) {
+		if (q->addr + q->size != p->addr) {
+			printk( KERN_ERR "kernel_malloc: allocation list destroyed\n" );
+			return;
+		}
+		q->size += p->size;
+		q->next = p->next;
+		if (p->next) p->next->prev = q;
+		free_kmap( p );
+		p = q;
+	}
+
+	/* merge with following region if possible */
+	q = p->next;
+	if (q && q->free) {
+		if (p->addr + p->size != q->addr) {
+			printk( KERN_ERR "kernel_malloc: allocation list destroyed\n" );
+			return;
+		}
+		p->size += q->size;
+		p->next = q->next;
+		if (q->next) q->next->prev = p;
+		free_kmap( q );
+	}
+}
+
+
+/*
+ * kernel_map() helpers
+ */
+static inline pte_t *
+pte_alloc_kernel_map(pmd_t *pmd, unsigned long address,
+		     unsigned long *memavailp)
+{
+	address = (address >> PAGE_SHIFT) & (PTRS_PER_PTE - 1);
+	if (pmd_none(*pmd)) {
+		pte_t *page = kernel_page_table(memavailp);
+		if (pmd_none(*pmd)) {
+			if (page) {
+				pmd_set(pmd, page);
+				memset( page, 0, PAGE_SIZE );
+				return page + address;
+			}
+			pmd_set(pmd, BAD_PAGETABLE);
 			return NULL;
-		addr = tmp->size + (unsigned long)tmp->addr;
+		}
+		if (memavailp)
+			panic("kernel_map: slept during init?!?");
+		cache_page((unsigned long) page);
+		free_page((unsigned long) page);
+	}
+	if (pmd_bad(*pmd)) {
+		printk( KERN_ERR "Bad pmd in pte_alloc_kernel_map: %08lx\n",
+		       pmd_val(*pmd));
+		pmd_set(pmd, BAD_PAGETABLE);
+		return NULL;
 	}
-	area->addr = (void *)addr;
-	area->size = size + IO_SIZE;
-	area->next = *p;
-	*p = area;
-	return area;
+	return (pte_t *) pmd_page(*pmd) + address;
 }
 
-static inline void free_io_area(void *addr)
+static inline void
+kernel_map_pte(pte_t *pte, unsigned long address, unsigned long size,
+	       unsigned long phys_addr, pgprot_t prot)
 {
-	struct vm_struct **p, *tmp;
+	unsigned long end;
 
-	if (!addr)
-		return;
-	addr = (void *)((unsigned long)addr & -IO_SIZE);
-	for (p = &iolist ; (tmp = *p) ; p = &tmp->next) {
-		if (tmp->addr == addr) {
-			*p = tmp->next;
-			__iounmap(tmp->addr, tmp->size);
-			kfree(tmp);
-			return;
-		}
+	address &= ~PMD_MASK;
+	end = address + size;
+	if (end > PMD_SIZE)
+		end = PMD_SIZE;
+	do {
+		pte_val(*pte) = phys_addr + pgprot_val(prot);
+		address += PAGE_SIZE;
+		phys_addr += PAGE_SIZE;
+		pte++;
+	} while (address < end);
+}
+
+static inline int
+kernel_map_pmd (pmd_t *pmd, unsigned long address, unsigned long size,
+		unsigned long phys_addr, pgprot_t prot,
+		unsigned long *memavailp)
+{
+	unsigned long end;
+
+	address &= ~PGDIR_MASK;
+	end = address + size;
+	if (end > PGDIR_SIZE)
+		end = PGDIR_SIZE;
+	phys_addr -= address;
+
+	if (CPU_IS_040_OR_060) {
+		do {
+			pte_t *pte = pte_alloc_kernel_map(pmd, address, memavailp);
+			if (!pte)
+				return -ENOMEM;
+			kernel_map_pte(pte, address, end - address,
+				       address + phys_addr, prot);
+			address = (address + PMD_SIZE) & PMD_MASK;
+			pmd++;
+		} while (address < end);
+	} else {
+		/* On the 68030 we use early termination page descriptors.
+		   Each one points to 64 pages (256K). */
+		int i = (address >> (PMD_SHIFT-4)) & 15;
+		do {
+			(&pmd_val(*pmd))[i++] = (address + phys_addr) | pgprot_val(prot);
+			address += PMD_SIZE / 16;
+		} while (address < end);
 	}
+	return 0;
 }
 
-#endif
 
 /*
  * Map some physical address range into the kernel address space. The
@@ -101,245 +260,304 @@
  */
 /* Rewritten by Andreas Schwab to remove all races. */
 
-void *__ioremap(unsigned long physaddr, unsigned long size, int cacheflag)
+unsigned long kernel_map(unsigned long phys_addr, unsigned long size,
+			 int cacheflag, unsigned long *memavailp)
 {
-	struct vm_struct *area;
-	unsigned long virtaddr, retaddr;
-	long offset;
-	pgd_t *pgd_dir;
-	pmd_t *pmd_dir;
-	pte_t *pte_dir;
-
-	/*
-	 * Don't allow mappings that wrap..
-	 */
-	if (!size || size > physaddr + size)
-		return NULL;
-
-#ifdef DEBUG
-	printk("ioremap: 0x%lx,0x%lx(%d) - ", physaddr, size, cacheflag);
-#endif
-	/*
-	 * Mappings have to be aligned
-	 */
-	offset = physaddr & (IO_SIZE - 1);
-	physaddr &= -IO_SIZE;
-	size = (size + offset + IO_SIZE - 1) & -IO_SIZE;
-
-	/*
-	 * Ok, go for it..
-	 */
-	area = get_io_area(size);
-	if (!area)
-		return NULL;
+	unsigned long retaddr, from, end;
+	pgd_t *dir;
+	pgprot_t prot;
+	KMAP *kmap;
+
+	/* Round down 'phys_addr' to 256 KB and adjust size */
+	retaddr = phys_addr & (KMAP_STEP-1);
+	size += retaddr;
+	phys_addr &= ~(KMAP_STEP-1);
+	/* Round up the size to 256 KB. It doesn't hurt if too much is
+	   mapped... */
+	size = (size + KMAP_STEP - 1) & ~(KMAP_STEP-1);
+	
+	down( &kmap_sem );
+	kmap = kmap_get_region(size, memavailp == NULL);
+	if (!kmap) {
+		up(&kmap_sem);
+		return 0;
+	}
+	from = kmap->addr;
+	retaddr += from;
+	kmap->mapaddr = retaddr;
+	end = from + size;
+	up( &kmap_sem );
 
-	virtaddr = (unsigned long)area->addr;
-	retaddr = virtaddr + offset;
-#ifdef DEBUG
-	printk("0x%lx,0x%lx,0x%lx", physaddr, virtaddr, retaddr);
-#endif
-
-	/*
-	 * add cache and table flags to physical address
-	 */
 	if (CPU_IS_040_OR_060) {
-		physaddr |= (_PAGE_PRESENT | _PAGE_GLOBAL040 |
-			     _PAGE_ACCESSED | _PAGE_DIRTY);
+		pgprot_val(prot) = (_PAGE_PRESENT | _PAGE_GLOBAL040 |
+				    _PAGE_ACCESSED | _PAGE_DIRTY);
 		switch (cacheflag) {
-		case IOMAP_FULL_CACHING:
-			physaddr |= _PAGE_CACHE040;
+		case KERNELMAP_FULL_CACHING:
+			pgprot_val(prot) |= _PAGE_CACHE040;
 			break;
-		case IOMAP_NOCACHE_SER:
+		case KERNELMAP_NOCACHE_SER:
 		default:
-			physaddr |= _PAGE_NOCACHE_S;
+			pgprot_val(prot) |= _PAGE_NOCACHE_S;
 			break;
-		case IOMAP_NOCACHE_NONSER:
-			physaddr |= _PAGE_NOCACHE;
+		case KERNELMAP_NOCACHE_NONSER:
+			pgprot_val(prot) |= _PAGE_NOCACHE;
 			break;
-		case IOMAP_WRITETHROUGH:
-			physaddr |= _PAGE_CACHE040W;
+		case KERNELMAP_NO_COPYBACK:
+			pgprot_val(prot) |= _PAGE_CACHE040W;
 			break;
 		}
-	} else {
-		physaddr |= (_PAGE_PRESENT | _PAGE_ACCESSED | _PAGE_DIRTY);
-		switch (cacheflag) {
-		case IOMAP_NOCACHE_SER:
-		case IOMAP_NOCACHE_NONSER:
-		default:
-			physaddr |= _PAGE_NOCACHE030;
-			break;
-		case IOMAP_FULL_CACHING:
-		case IOMAP_WRITETHROUGH:
-			break;
+	} else
+		pgprot_val(prot) = (_PAGE_PRESENT | _PAGE_ACCESSED |
+				    _PAGE_DIRTY |
+				    ((cacheflag == KERNELMAP_FULL_CACHING ||
+				      cacheflag == KERNELMAP_NO_COPYBACK)
+				     ? 0 : _PAGE_NOCACHE030));
+
+	phys_addr -= from;
+	dir = pgd_offset_k(from);
+	while (from < end) {
+		pmd_t *pmd = pmd_alloc_kernel(dir, from);
+
+		if (kernel_map_pmd(pmd, from, end - from, phys_addr + from,
+				   prot, memavailp)) {
+			printk( KERN_ERR "kernel_map: out of memory\n" );
+			return 0UL;
 		}
+		from = (from + PGDIR_SIZE) & PGDIR_MASK;
+		dir++;
 	}
 
-	while (size > 0) {
-#ifdef DEBUG
-		if (!(virtaddr & (PTRTREESIZE-1)))
-			printk ("\npa=%#lx va=%#lx ", physaddr, virtaddr);
-#endif
-		pgd_dir = pgd_offset_k(virtaddr);
-		pmd_dir = pmd_alloc_kernel(pgd_dir, virtaddr);
-		if (!pmd_dir) {
-			printk("ioremap: no mem for pmd_dir\n");
-			return NULL;
-		}
+	return retaddr;
+}
 
-		if (CPU_IS_020_OR_030) {
-			pmd_dir->pmd[(virtaddr/PTRTREESIZE)&-16] = physaddr;
-			physaddr += PTRTREESIZE;
-			virtaddr += PTRTREESIZE;
-			size -= PTRTREESIZE;
-		} else {
-			pte_dir = pte_alloc_kernel(pmd_dir, virtaddr);
-			if (!pte_dir) {
-				printk("ioremap: no mem for pte_dir\n");
-				return NULL;
-			}
 
-			pte_val(*pte_dir) = physaddr;
-			virtaddr += PAGE_SIZE;
-			physaddr += PAGE_SIZE;
-			size -= PAGE_SIZE;
+/*
+ * kernel_unmap() helpers
+ */
+static inline void pte_free_kernel_unmap( pmd_t *pmd )
+{
+	unsigned long page = pmd_page(*pmd);
+	mem_map_t *pagemap = &mem_map[MAP_NR(page)];
+	
+	pmd_clear(pmd);
+	cache_page(page);
+
+	if (PageReserved( pagemap )) {
+		/* need to unreserve pages that were allocated with memavailp != NULL;
+		 * this works only if 'page' is page-aligned */
+		if (page & ~PAGE_MASK)
+			return;
+		clear_bit( PG_reserved, &pagemap->flags );
+		atomic_set( &pagemap->count, 1 );
+	}
+	free_page( page );
+}
+
+/*
+ * This not only unmaps the requested region, but also loops over the whole
+ * pmd to determine whether the other ptes are clear (so that the page can be
+ * freed). If so, it returns 1, otherwise 0.
+ */
+static inline int
+kernel_unmap_pte_range(pmd_t * pmd, unsigned long address, unsigned long size)
+{
+	pte_t *pte;
+	unsigned long addr2, end, end2;
+	int all_clear = 1;
+
+	if (pmd_none(*pmd))
+		return( 0 );
+	if (pmd_bad(*pmd)) {
+		printk( KERN_ERR "kernel_unmap_pte_range: bad pmd (%08lx)\n",
+				pmd_val(*pmd) );
+		pmd_clear(pmd);
+		return( 0 );
+	}
+	address &= ~PMD_MASK;
+	addr2 = 0;
+	pte = pte_offset(pmd, addr2);
+	end = address + size;
+	if (end > PMD_SIZE)
+		end = PMD_SIZE;
+	end2 = addr2 + PMD_SIZE;
+	while( addr2 < end2 ) {
+		if (!pte_none(*pte)) {
+			if (address <= addr2 && addr2 < end)
+				pte_clear(pte);
+			else
+				all_clear = 0;
 		}
+		++pte;
+		addr2 += PAGE_SIZE;
 	}
-#ifdef DEBUG
-	printk("\n");
-#endif
-	flush_tlb_all();
+	return( all_clear );
+}
 
-	return (void *)retaddr;
+static inline void
+kernel_unmap_pmd_range(pgd_t * dir, unsigned long address, unsigned long size)
+{
+	pmd_t * pmd;
+	unsigned long end;
+
+	if (pgd_none(*dir))
+		return;
+	if (pgd_bad(*dir)) {
+		printk( KERN_ERR "kernel_unmap_pmd_range: bad pgd (%08lx)\n",
+				pgd_val(*dir) );
+		pgd_clear(dir);
+		return;
+	}
+	pmd = pmd_offset(dir, address);
+	address &= ~PGDIR_MASK;
+	end = address + size;
+	if (end > PGDIR_SIZE)
+		end = PGDIR_SIZE;
+	
+	if (CPU_IS_040_OR_060) {
+		do {
+			if (kernel_unmap_pte_range(pmd, address, end - address))
+				pte_free_kernel_unmap( pmd );
+			address = (address + PMD_SIZE) & PMD_MASK;
+			pmd++;
+		} while (address < end);
+	} else {
+		/* On the 68030 clear the early termination descriptors */
+		int i = (address >> (PMD_SHIFT-4)) & 15;
+		do {
+			(&pmd_val(*pmd))[i++] = 0;
+			address += PMD_SIZE / 16;
+		} while (address < end);
+	}
 }
 
 /*
- * Unmap a ioremap()ed region again
+ * Unmap a kernel_map()ed region again
  */
-void iounmap(void *addr)
+void kernel_unmap( unsigned long addr )
 {
-	free_io_area(addr);
+	unsigned long end;
+	pgd_t *dir;
+	KMAP *p;
+
+	down( &kmap_sem );
+	
+	/* find region for 'addr' in list; must search for mapaddr! */
+	for( p = kmap_regions; p; p = p->next )
+		if (!p->free && p->mapaddr == addr)
+			break;
+	if (!p) {
+		printk( KERN_ERR "kernel_unmap: trying to free invalid region\n" );
+		up( &kmap_sem ); return;	/* don't return with the semaphore held */
+	}
+	addr = p->addr;
+	end = addr + p->size;
+	kmap_put_region( p );
+
+	dir = pgd_offset_k( addr );
+	while( addr < end ) {
+		kernel_unmap_pmd_range( dir, addr, end - addr );
+		addr = (addr + PGDIR_SIZE) & PGDIR_MASK;
+		dir++;
+	}
+	
+	up( &kmap_sem );
+	/* flushing for a range would do, but there's no such function for kernel
+	 * address space... */
+	flush_tlb_all();
 }
 
+
 /*
- * __iounmap unmaps nearly everything, so be careful
- * it doesn't free currently pointer/page tables anymore but it
- * wans't used anyway and might be added later.
+ * kernel_set_cachemode() helpers
  */
-void __iounmap(void *addr, unsigned long size)
-{
-	unsigned long virtaddr = (unsigned long)addr;
-	pgd_t *pgd_dir;
-	pmd_t *pmd_dir;
-	pte_t *pte_dir;
+static inline void set_cmode_pte( pmd_t *pmd, unsigned long address,
+				  unsigned long size, unsigned cmode )
+{	pte_t *pte;
+	unsigned long end;
 
-	while (size > 0) {
-		pgd_dir = pgd_offset_k(virtaddr);
-		if (pgd_bad(*pgd_dir)) {
-			printk("iounmap: bad pgd(%08lx)\n", pgd_val(*pgd_dir));
-			pgd_clear(pgd_dir);
-			return;
-		}
-		pmd_dir = pmd_offset(pgd_dir, virtaddr);
+	if (pmd_none(*pmd))
+		return;
+
+	pte = pte_offset( pmd, address );
+	address &= ~PMD_MASK;
+	end = address + size;
+	if (end >= PMD_SIZE)
+		end = PMD_SIZE;
+
+	for( ; address < end; pte++ ) {
+		pte_val(*pte) = (pte_val(*pte) & ~_PAGE_NOCACHE) | cmode;
+		address += PAGE_SIZE;
+	}
+}
 
-		if (CPU_IS_020_OR_030) {
-			int pmd_off = (virtaddr/PTRTREESIZE) & -16;
 
-			if ((pmd_dir->pmd[pmd_off] & _DESCTYPE_MASK) == _PAGE_PRESENT) {
-				pmd_dir->pmd[pmd_off] = 0;
-				virtaddr += PTRTREESIZE;
-				size -= PTRTREESIZE;
-				continue;
-			}
-		}
+static inline void set_cmode_pmd( pgd_t *dir, unsigned long address,
+				  unsigned long size, unsigned cmode )
+{
+	pmd_t *pmd;
+	unsigned long end;
 
-		if (pmd_bad(*pmd_dir)) {
-			printk("iounmap: bad pmd (%08lx)\n", pmd_val(*pmd_dir));
-			pmd_clear(pmd_dir);
-			return;
-		}
-		pte_dir = pte_offset(pmd_dir, virtaddr);
+	if (pgd_none(*dir))
+		return;
 
-		pte_val(*pte_dir) = 0;
-		virtaddr += PAGE_SIZE;
-		size -= PAGE_SIZE;
+	pmd = pmd_offset( dir, address );
+	address &= ~PGDIR_MASK;
+	end = address + size;
+	if (end > PGDIR_SIZE)
+		end = PGDIR_SIZE;
+
+	if ((pmd_val(*pmd) & _DESCTYPE_MASK) == _PAGE_PRESENT) {
+		/* 68030 early termination descriptor */
+		pmd_val(*pmd) = (pmd_val(*pmd) & ~_PAGE_NOCACHE) | cmode;
+		return;
+	}
+	else {
+		/* "normal" tables */
+		for( ; address < end; pmd++ ) {
+			set_cmode_pte( pmd, address, end - address, cmode );
+			address = (address + PMD_SIZE) & PMD_MASK;
+		}
 	}
-
-	flush_tlb_all();
 }
 
+
 /*
  * Set new cache mode for some kernel address space.
  * The caller must push data for that range itself, if such data may already
  * be in the cache.
  */
-void kernel_set_cachemode(void *addr, unsigned long size, int cmode)
+void kernel_set_cachemode( unsigned long address, unsigned long size,
+						   unsigned cmode )
 {
-	unsigned long virtaddr = (unsigned long)addr;
-	pgd_t *pgd_dir;
-	pmd_t *pmd_dir;
-	pte_t *pte_dir;
-
+	pgd_t *dir = pgd_offset_k( address );
+	unsigned long end = address + size;
+	
 	if (CPU_IS_040_OR_060) {
-		switch (cmode) {
-		case IOMAP_FULL_CACHING:
+		switch( cmode ) {
+		  case KERNELMAP_FULL_CACHING:
 			cmode = _PAGE_CACHE040;
 			break;
-		case IOMAP_NOCACHE_SER:
-		default:
+		  case KERNELMAP_NOCACHE_SER:
+		  default:
 			cmode = _PAGE_NOCACHE_S;
 			break;
-		case IOMAP_NOCACHE_NONSER:
+		  case KERNELMAP_NOCACHE_NONSER:
 			cmode = _PAGE_NOCACHE;
 			break;
-		case IOMAP_WRITETHROUGH:
+		  case KERNELMAP_NO_COPYBACK:
 			cmode = _PAGE_CACHE040W;
 			break;
 		}
-	} else {
-		switch (cmode) {
-		case IOMAP_NOCACHE_SER:
-		case IOMAP_NOCACHE_NONSER:
-		default:
-			cmode = _PAGE_NOCACHE030;
-			break;
-		case IOMAP_FULL_CACHING:
-		case IOMAP_WRITETHROUGH:
-			cmode = 0;
-		}
-	}
-
-	while (size > 0) {
-		pgd_dir = pgd_offset_k(virtaddr);
-		if (pgd_bad(*pgd_dir)) {
-			printk("iocachemode: bad pgd(%08lx)\n", pgd_val(*pgd_dir));
-			pgd_clear(pgd_dir);
-			return;
-		}
-		pmd_dir = pmd_offset(pgd_dir, virtaddr);
-
-		if (CPU_IS_020_OR_030) {
-			int pmd_off = (virtaddr/PTRTREESIZE) & -16;
-
-			if ((pmd_dir->pmd[pmd_off] & _DESCTYPE_MASK) == _PAGE_PRESENT) {
-				pmd_dir->pmd[pmd_off] = (pmd_dir->pmd[pmd_off] &
-							 _CACHEMASK040) | cmode;
-				virtaddr += PTRTREESIZE;
-				size -= PTRTREESIZE;
-				continue;
-			}
-		}
-
-		if (pmd_bad(*pmd_dir)) {
-			printk("iocachemode: bad pmd (%08lx)\n", pmd_val(*pmd_dir));
-			pmd_clear(pmd_dir);
-			return;
-		}
-		pte_dir = pte_offset(pmd_dir, virtaddr);
-
-		pte_val(*pte_dir) = (pte_val(*pte_dir) & _CACHEMASK040) | cmode;
-		virtaddr += PAGE_SIZE;
-		size -= PAGE_SIZE;
+	} else
+		cmode = ((cmode == KERNELMAP_FULL_CACHING ||
+				  cmode == KERNELMAP_NO_COPYBACK)    ?
+			 0 : _PAGE_NOCACHE030);
+
+	for( ; address < end; dir++ ) {
+		set_cmode_pmd( dir, address, end - address, cmode );
+		address = (address + PGDIR_SIZE) & PGDIR_MASK;
 	}
-
+	/* flushing for a range would do, but there's no such function for kernel
+	 * address space... */
 	flush_tlb_all();
 }
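
For what it's worth, this is roughly how a driver would use the re-instated
interface (the atafb hunk further down does exactly this). The physical
address below is made up, and the usual includes are assumed (<asm/pgtable.h>
for kernel_map() and the KERNELMAP_* flags, <linux/errno.h>); with
memavailp == NULL the call may sleep and use kmalloc(), so this is for
normal post-init driver code only:

/* Sketch only -- the board address is invented, error handling is minimal. */
static volatile unsigned char *regs;

static int example_attach(void)
{
	unsigned long va;

	va = kernel_map(0x40000000UL,		/* physical base (made up)   */
			0x10000,		/* 64K, rounded up to 256K   */
			KERNELMAP_NOCACHE_SER,	/* serialized, non-cacheable */
			NULL);			/* NULL: kmalloc() is usable */
	if (!va)
		return -ENOMEM;
	regs = (volatile unsigned char *)va;
	return 0;
}

static void example_detach(void)
{
	kernel_unmap((unsigned long)regs);	/* drops the PTEs and the KMAP region */
	regs = NULL;
}
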
--- linux-2.2.0pre7/arch/m68k/mm/memory.c.rz	Sun Jan 31 14:14:14 1999
+++ linux-2.2.0pre7/arch/m68k/mm/memory.c	Sun Jan 31 14:15:00 1999
@@ -10,7 +10,6 @@
 #include <linux/string.h>
 #include <linux/types.h>
 #include <linux/malloc.h>
-#include <linux/init.h>
 
 #include <asm/setup.h>
 #include <asm/segment.h>
@@ -98,31 +97,6 @@
 
 #define PTABLE_SIZE (PTRS_PER_PMD * sizeof(pmd_t))
 
-void __init init_pointer_table(unsigned long ptable)
-{
-	ptable_desc *dp;
-	unsigned long page = ptable & PAGE_MASK;
-	unsigned char mask = 1 << ((ptable - page)/PTABLE_SIZE);
-
-	dp = PAGE_PD(page);
-	if (!(PD_MARKBITS(dp) & mask)) {
-		PD_MARKBITS(dp) = 0xff;
-		(dp->prev = ptable_list.prev)->next = dp;
-		(dp->next = &ptable_list)->prev = dp;
-	}
-
-	PD_MARKBITS(dp) &= ~mask;
-#ifdef DEBUG
-	printk("init_pointer_table: %lx, %x\n", ptable, PD_MARKBITS(dp));
-#endif
-
-	/* unreserve the page so it's possible to free that page */
-	dp->flags &= ~(1 << PG_reserved);
-	atomic_set(&dp->count, 1);
-
-	return;
-}
-
 pmd_t *get_pointer_table (void)
 {
 	ptable_desc *dp = ptable_list.next;
@@ -202,6 +176,103 @@
 	return 0;
 }
 
+/* maximum pages used for kpointer tables */
+#define KPTR_PAGES      4
+/* # of reserved slots */
+#define RESERVED_KPTR	4
+extern pmd_tablepage kernel_pmd_table; /* reserved in head.S */
+
+static struct kpointer_pages {
+        pmd_tablepage *page[KPTR_PAGES];
+        u_char alloced[KPTR_PAGES];
+} kptr_pages;
+
+void init_kpointer_table(void) {
+	short i = KPTR_PAGES-1;
+
+	/* first page is reserved in head.S */
+	kptr_pages.page[i] = &kernel_pmd_table;
+	kptr_pages.alloced[i] = ~(0xff>>RESERVED_KPTR);
+	for (i--; i>=0; i--) {
+		kptr_pages.page[i] = NULL;
+		kptr_pages.alloced[i] = 0;
+	}
+}
+
+pmd_t *get_kpointer_table (void)
+{
+	/* For pointer tables for the kernel virtual address space,
+	 * use the page reserved in head.S, which can hold up to
+	 * 8 pointer tables. 3 of these tables are always reserved
+	 * (kernel_pg_dir, swapper_pg_dir and kernel pointer table for
+	 * the first 16 MB of RAM). In addition, the 4th pointer table
+	 * in this page is reserved. On Amiga and Atari, it is used to
+	 * map in the hardware registers. It may be used for other
+	 * purposes on other 68k machines. This leaves 4 pointer tables
+	 * available for use by the kernel. One of them is usually used
+	 * for the vmalloc tables. This allows mapping of 3 * 32 = 96 MB
+	 * of physical memory. But these pointer tables are also used
+	 * for other purposes, like kernel_map(), so further pages can
+	 * now be allocated.
+	 */
+	pmd_tablepage *page;
+	pmd_table *table;
+	long nr, offset = -8;
+	short i;
+
+	for (i=KPTR_PAGES-1; i>=0; i--) {
+		asm volatile("bfffo %1{%2,#8},%0"
+			: "=d" (nr)
+			: "d" ((u_char)~kptr_pages.alloced[i]), "d" (offset));
+		if (nr)
+			break;
+	}
+	if (i < 0) {
+		printk("No space for kernel pointer table!\n");
+		return NULL;
+	}
+	if (!(page = kptr_pages.page[i])) {
+		if (!(page = (pmd_tablepage *)get_free_page(GFP_KERNEL))) {
+			printk("No space for kernel pointer table!\n");
+			return NULL;
+		}
+		flush_tlb_kernel_page((unsigned long) page);
+		nocache_page((u_long)(kptr_pages.page[i] = page));
+	}
+	asm volatile("bfset %0@{%1,#1}"
+		: /* no output */
+		: "a" (&kptr_pages.alloced[i]), "d" (nr-offset));
+	table = &(*page)[nr-offset];
+	memset(table, 0, sizeof(pmd_table));
+	return ((pmd_t *)table);
+}
+
+void free_kpointer_table (pmd_t *pmdp)
+{
+	pmd_table *table = (pmd_table *)pmdp;
+	pmd_tablepage *page = (pmd_tablepage *)((u_long)table & PAGE_MASK);
+	long nr;
+	short i;
+
+	for (i=KPTR_PAGES-1; i>=0; i--) {
+		if (kptr_pages.page[i] == page)
+			break;
+	}
+	nr = ((u_long)table - (u_long)page) / sizeof(pmd_table);
+	if (!table || i < 0 || (i == KPTR_PAGES-1 && nr < RESERVED_KPTR)) {
+		printk("Attempt to free invalid kernel pointer table: %p\n", table);
+		return;
+	}
+	asm volatile("bfclr %0@{%1,#1}"
+		: /* no output */
+		: "a" (&kptr_pages.alloced[i]), "d" (nr));
+	if (!kptr_pages.alloced[i]) {
+		kptr_pages.page[i] = 0;
+		cache_page ((u_long)page);
+		free_page ((u_long)page);
+	}
+}
+
 static unsigned long transp_transl_matches( unsigned long regval,
 					    unsigned long vaddr )
 {
@@ -237,6 +308,7 @@
  */
 unsigned long mm_vtop (unsigned long vaddr)
 {
+#ifndef CONFIG_SINGLE_MEMORY_CHUNK
 	int i=0;
 	unsigned long voff = vaddr;
 	unsigned long offset = 0;
@@ -252,6 +324,10 @@
 			offset += m68k_memory[i].size;
 		i++;
 	}while (i < m68k_num_memory);
+#else
+	if (vaddr < m68k_memory[0].size)
+		return m68k_memory[0].addr + vaddr;
+#endif
 
 	return mm_vtop_fallback(vaddr);
 }
@@ -373,6 +449,7 @@
 #ifndef CONFIG_SINGLE_MEMORY_CHUNK
 unsigned long mm_ptov (unsigned long paddr)
 {
+#ifndef CONFIG_SINGLE_MEMORY_CHUNK
 	int i = 0;
 	unsigned long offset = 0;
 
@@ -389,6 +466,11 @@
 			offset += m68k_memory[i].size;
 		i++;
 	}while (i < m68k_num_memory);
+#else
+	unsigned long base = m68k_memory[0].addr;
+	if (paddr >= base && paddr < (base + m68k_memory[0].size))
+		return (paddr - base);
+#endif
 
 	/*
 	 * assume that the kernel virtual address is the same as the
@@ -478,7 +560,7 @@
  *	Jes was worried about performance (urhh ???) so its optional
  */
  
-void (*mach_l2_flush)(int) = NULL;
+extern void (*mach_l2_flush)(int) = NULL;
 #endif
  
 /*
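
Not part of the diff: the bfffo/bfset pair in get_kpointer_table() above is
just a find-first-free-slot search over an 8-bit mask. Each page holds eight
pointer tables, the MSB of alloced[i] stands for slot 0, and bfffo on the
inverted mask with offset -8 yields 0 exactly when the page is full, which is
what the "if (nr)" test checks. A plain C model of the same search:

/*
 * Host-side model (not kernel code) of the slot search done with bfffo:
 * bit 7 of the mask is slot 0, a set bit means the slot is taken, and -1
 * stands in for "this page is full".
 */
#include <stdio.h>

static int first_free_slot(unsigned char alloced)
{
	int slot;

	for (slot = 0; slot < 8; slot++)
		if (!(alloced & (0x80 >> slot)))
			return slot;
	return -1;			/* all eight slots are taken */
}

int main(void)
{
	/* the head.S page with RESERVED_KPTR = 4 slots reserved: ~(0xff >> 4) = 0xf0 */
	printf("first free slot in 0xf0: %d\n", first_free_slot(0xf0));	/* -> 4  */
	printf("first free slot in 0xff: %d\n", first_free_slot(0xff));	/* -> -1 */
	return 0;
}
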
--- linux-2.2.0pre7/drivers/video/atafb.c.rz	Sun Jan 31 15:21:53 1999
+++ linux-2.2.0pre7/drivers/video/atafb.c	Sun Jan 31 15:30:53 1999
@@ -2829,9 +2829,19 @@
 		/* Map the video memory (physical address given) to somewhere
 		 * in the kernel address space.
 		 */
+#if 1
+		external_addr = kernel_map(external_addr, external_len,
+					   IOMAP_WRITETHROUGH, NULL);
+#else
 		external_addr = ioremap_writethrough(external_addr, external_len);
+#endif
 		if (external_vgaiobase)
+#if 1
+			external_vgaiobase = kernel_map(external_vgaiobase,
+				0x10000, IOMAP_NOCACHE_SER, NULL);
+#else
 			external_vgaiobase = ioremap(external_vgaiobase, 0x10000 );
+#endif
 		screen_base      =
 		real_screen_base = external_addr;
 		screen_len       = external_len & PAGE_MASK;
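
Note that the atafb hunk above passes the old IOMAP_* names straight into
kernel_map(). That only works if those constants (assumed to be the usual
0..3 numbering from <asm/io.h>) still line up with the KERNELMAP_* flags
re-introduced in pgtable.h below, with IOMAP_WRITETHROUGH standing in for
KERNELMAP_NO_COPYBACK. A compile-time check along these lines would keep
them honest; it is a sketch, not part of the patch:

#if IOMAP_FULL_CACHING   != KERNELMAP_FULL_CACHING   || \
    IOMAP_NOCACHE_SER    != KERNELMAP_NOCACHE_SER    || \
    IOMAP_NOCACHE_NONSER != KERNELMAP_NOCACHE_NONSER || \
    IOMAP_WRITETHROUGH   != KERNELMAP_NO_COPYBACK
#error "IOMAP_* and KERNELMAP_* cache flags have diverged"
#endif
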
--- linux-2.2.0pre7/include/asm-m68k/bootinfo.h.rz	Sun Jan 31 14:26:10 1999
+++ linux-2.2.0pre7/include/asm-m68k/bootinfo.h	Sun Jan 31 14:26:42 1999
@@ -46,6 +46,12 @@
     unsigned long data[0];		/* data */
 };
 
+#else /* __ASSEMBLY__ */
+
+BIR_tag		= 0
+BIR_size	= BIR_tag+2
+BIR_data	= BIR_size+2
+
 #endif /* __ASSEMBLY__ */
 
 
@@ -281,6 +287,14 @@
 	unsigned long adbdelay;
 	unsigned long timedbra;
 };
+#else
+
+#define BI_videoaddr	BI_un
+#define BI_videorow	BI_videoaddr+4
+#define BI_videodepth	BI_videorow+4
+#define BI_dimensions	BI_videodepth+4
+#define BI_args		BI_dimensions+4
+#define BI_cpuid	BI_args+56
 
 #endif
 
--- linux-2.2.0pre7/include/asm-m68k/pgtable.h.rz	Sun Jan 31 14:28:29 1999
+++ linux-2.2.0pre7/include/asm-m68k/pgtable.h	Sun Jan 31 14:28:38 1999
@@ -13,7 +13,14 @@
  * the m68k page table tree.
  */
 
-#include <asm/virtconvert.h>
+/* For virtual address to physical address conversion */
+extern unsigned long mm_vtop(unsigned long addr) __attribute__ ((const));
+extern unsigned long mm_ptov(unsigned long addr) __attribute__ ((const));
+
+#include <asm/virtconvert.h>
+
+#define VTOP(addr)  (mm_vtop((unsigned long)(addr)))
+#define PTOV(addr)  (mm_ptov((unsigned long)(addr)))
 
 /*
  * Cache handling functions
@@ -429,24 +436,34 @@
 extern inline void pmd_set(pmd_t * pmdp, pte_t * ptep)
 {
 	int i;
-	unsigned long ptbl;
-	ptbl = virt_to_phys(ptep);
-	for (i = 0; i < 16; i++, ptbl += sizeof(pte_table)/16)
-		pmdp->pmd[i] = _PAGE_TABLE | _PAGE_ACCESSED | ptbl;
+
+	ptep = (pte_t *) virt_to_phys(ptep);
+	for (i = 0; i < 16; i++, ptep += PTRS_PER_PTE/16)
+		pmdp->pmd[i] = _PAGE_TABLE | _PAGE_ACCESSED | (unsigned long)ptep;
+}
+
+/* early termination version of the above */
+extern inline void pmd_set_et(pmd_t * pmdp, pte_t * ptep)
+{
+	int i;
+
+	ptep = (pte_t *) virt_to_phys(ptep);
+	for (i = 0; i < 16; i++, ptep += PTRS_PER_PTE/16)
+		pmdp->pmd[i] = _PAGE_PRESENT | _PAGE_ACCESSED | (unsigned long)ptep;
 }
 
 extern inline void pgd_set(pgd_t * pgdp, pmd_t * pmdp)
 { pgd_val(*pgdp) = _PAGE_TABLE | _PAGE_ACCESSED | virt_to_phys(pmdp); }
 
 extern inline unsigned long pte_page(pte_t pte)
-{ return (unsigned long)phys_to_virt(pte_val(pte) & PAGE_MASK); }
+{ return (unsigned long)phys_to_virt((unsigned long)(pte_val(pte) & PAGE_MASK)); }
 
 extern inline unsigned long pmd_page2(pmd_t *pmd)
-{ return (unsigned long)phys_to_virt(pmd_val(*pmd) & _TABLE_MASK); }
+{ return (unsigned long)phys_to_virt((unsigned long)(pmd_val(*pmd) & _TABLE_MASK)); }
 #define pmd_page(pmd) pmd_page2(&(pmd))
 
 extern inline unsigned long pgd_page(pgd_t pgd)
-{ return (unsigned long)phys_to_virt(pgd_val(pgd) & _TABLE_MASK); }
+{ return (unsigned long)phys_to_virt((unsigned long)(pgd_val(pgd) & _TABLE_MASK)); }
 
 extern inline int pte_none(pte_t pte)		{ return !pte_val(pte); }
 extern inline int pte_present(pte_t pte)	{ return pte_val(pte) & (_PAGE_PRESENT | _PAGE_FAKE_SUPER); }
@@ -530,7 +547,7 @@
 	return mm->pgd + (address >> PGDIR_SHIFT);
 }
 
-#define swapper_pg_dir kernel_pg_dir
+extern pgd_t swapper_pg_dir[128];
 extern pgd_t kernel_pg_dir[128];
 
 extern inline pgd_t * pgd_offset_k(unsigned long address)
@@ -608,6 +625,8 @@
 
 extern pmd_t *get_pointer_table(void);
 extern int free_pointer_table(pmd_t *);
+extern pmd_t *get_kpointer_table(void);
+extern void free_kpointer_table(pmd_t *);
 
 extern __inline__ pte_t *get_pte_fast(void)
 {
@@ -735,12 +754,29 @@
 
 extern inline void pmd_free_kernel(pmd_t * pmd)
 {
-	free_pmd_fast(pmd);
+	free_kpointer_table(pmd);
 }
 
 extern inline pmd_t * pmd_alloc_kernel(pgd_t * pgd, unsigned long address)
 {
-	return pmd_alloc(pgd, address);
+	address = (address >> PMD_SHIFT) & (PTRS_PER_PMD - 1);
+	if (pgd_none(*pgd)) {
+		pmd_t *page = get_kpointer_table();
+		if (pgd_none(*pgd)) {
+			if (page) {
+				pgd_set(pgd, page);
+				return page + address;
+			}
+			pgd_set(pgd, (pmd_t *)BAD_PAGETABLE);
+			return NULL;
+		}
+		free_kpointer_table(page);
+	}
+	if (pgd_bad(*pgd)) {
+		__bad_pmd(pgd);
+		return NULL;
+	}
+	return (pmd_t *) pgd_page(*pgd) + address;
 }
 
 extern inline void pgd_free(pgd_t * pgd)
@@ -779,7 +815,26 @@
 int mm_end_of_chunk (unsigned long addr, int len);
 #endif
 
-extern void kernel_set_cachemode(void *addr, unsigned long size, int cmode);
+/*
+ * Map some physical address range into the kernel address space.
+ */
+extern unsigned long kernel_map(unsigned long paddr, unsigned long size,
+				int nocacheflag, unsigned long *memavailp );
+/*
+ * Unmap a region alloced by kernel_map().
+ */
+extern void kernel_unmap( unsigned long addr );
+/*
+ * Change the cache mode of some kernel address range.
+ */
+extern void kernel_set_cachemode( unsigned long address, unsigned long size,
+				  unsigned cmode );
+
+/* Values for nocacheflag and cmode */
+#define	KERNELMAP_FULL_CACHING		0
+#define	KERNELMAP_NOCACHE_SER		1
+#define	KERNELMAP_NOCACHE_NONSER	2
+#define	KERNELMAP_NO_COPYBACK		3
 
 /*
  * The m68k doesn't have any external MMU info: the kernel page
--- linux-2.2.0pre7/include/video/font.h.rz	Sun Jan 31 15:03:00 1999
+++ linux-2.2.0pre7/include/video/font.h	Sun Jan 31 15:03:19 1999
@@ -11,6 +11,19 @@
 #ifndef _VIDEO_FONT_H
 #define _VIDEO_FONT_H
 
+#ifdef __ASSEMBLY__
+
+#ifdef __mc68000__
+#define FBCON_FONT_DESC_idx	0
+#define FBCON_FONT_DESC_name	(FBCON_FONT_DESC_idx   +4)
+#define FBCON_FONT_DESC_width	(FBCON_FONT_DESC_name  +4)
+#define FBCON_FONT_DESC_height	(FBCON_FONT_DESC_width +4)
+#define FBCON_FONT_DESC_data	(FBCON_FONT_DESC_height+4)
+#define FBCON_FONT_DESC_pref	(FBCON_FONT_DESC_data  +4)
+#endif
+
+#else /* __ASSEMBLY__ */
+
 #include <linux/types.h>
 
 struct fbcon_font_desc {
@@ -47,5 +60,7 @@
 
 /* Max. length for the name of a predefined font */
 #define MAX_FONT_NAME	32
+
+#endif /* __ASSEMBLY__ */
 
 #endif /* _VIDEO_FONT_H */

