Re: console plans


From: Marcus Brinkmann
Subject: Re: console plans
Date: Thu, 14 Feb 2002 21:14:05 +0100
User-agent: Mutt/1.3.27i

Hi,

thanks for your reply, it is very useful.  I am on the jump between two
conferences, so I cannot do the research right now, but I can already make
some comments.

On Wed, Feb 13, 2002 at 04:36:31AM -0500, Roland McGrath wrote:
> > * A kernel driver to get the keyboard events from.  To reduce kernel code,
> >   raw scan code events will be consumed in the ix86 version.
> >   Status: GNU Mach can provide them, I have a hack for OSKit-Mach here that
> >   needs to be polished and integrated into the source (a cheesy kbd device
> >   that sets keyboard in raw mode when being opened).
> 
> Can you show me your hack?

Sure, as soon as I wipe the dust off (it's on my laptop, so I don't have it
available right now).  It's really just another oskit-mach pseudo device that
sets raw mode on first open and normal mode on last close (that's the
intention at least; the closing part doesn't seem to work out right yet :)

> The path of righteousness is to add proper ps2
> (and usb, for that matter) drivers to the oskit.  I would do that right
> quick if I had an x86 test machine.  

I looked into oskit a bit (playing with ideas like pcmcia, character device
and sound support), but it's a huge framework and I couldn't get my foot in
the door quickly enough to get hooked.  We will have to do something about
your hardware situation, though :)

> >   I don't remember if it works or if there are bugs, if there are bugs in
> >   the mem device in OSKit-Mach I have fixed them locally and can submit a
> >   fix (in other words: works for me).
> 
> I think you found some mem bugs and I integrated fixes for them already.
> Please check if you have anything else I should put in.

Ah, yes, I think I remember now; that might very well have been all of it.
As I will do a full diff of my experimental tree against the CVS version
anyway, I will certainly notice any pending bits and bytes.

> > The console server emits characters in the local encoding
> 
> That seems like the right model.  One could perhaps simplify by always
> going with UTF-8

The console seems to be the natural place to configure the system-wide
default encoding.  I am not sure what an alternative would look like.
Where would the actual conversion take place if we were to use UTF-8
unconditionally?

> or something, but it makes sense to me that the
> term<->console io should look like what term<->serial io would look like if
> you had a local-language ASCII terminal attached to the serial port (some
> of us still have actual terminals, though I left my last one behind).  (But
> maybe your implementation for PC scan codes should go to Unicode internally
> and use libc's iconv code to produce the local format.)

I suggested using some standard encoding, and the only reason I considered
ASE at all is that it is the standard encoding for BDF fonts, so having some
way to get to ASE is surely useful.  This can also be a Unicode-to-ASE
translation table if we were to use Unicode throughout, at least internally,
which is fine by me.

BDF only allows for "one" internal encoding (if the encoding is -1, you can
use the internal encoding value) per font file.  However, I looked at the ASE
encoding now (expecting something big like Unicode), and it's just a
disappointing 7-bit compatible table with ~220 entries.  So using ASE as
_the_ standard encoding was a silly idea
(http://partners.adobe.com/asn/developer/type/stdenc.html).

Especially with the input below, I feel much better about Unicode now
(although I would not like to support the whole lot of it right from the
start, especially compose characters like A + ["] = Ä and stuff like that).

About iconv: man, if only I had more connections in my brain!  I don't know
how many times I wondered what exactly this iconv stuff is, being only dimly
aware that it has something to do with character conversion.  That it is
exactly one of the missing pieces in the console server only occurred to me
now that you mentioned it!  I will study this in more detail.
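
(To make that concrete for myself, the conversion into whatever we use
internally would be little more than the following.  This is an untested
sketch, local_to_ucs4 is just a name I made up, and all error handling
is omitted:)

#include <iconv.h>
#include <wchar.h>

/* Convert INLEN bytes in the local encoding FROM_CHARSET into UCS-4
   characters in OUTBUF and return the number of characters produced.
   Untested sketch, no error handling.  */
size_t
local_to_ucs4 (const char *from_charset, char *inbuf, size_t inlen,
               wchar_t *outbuf, size_t outlen)
{
  iconv_t cd = iconv_open ("UCS-4", from_charset);
  char *out = (char *) outbuf;
  size_t outbytes = outlen * sizeof (wchar_t);

  iconv (cd, &inbuf, &inlen, &out, &outbytes);
  iconv_close (cd);
  return outlen - outbytes / sizeof (wchar_t);
}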

> I'm not sure what you have in mind when you say "Unicode support is not
> feasible".  As far as I am aware, all of the POSIX.1 io and terminal
> interfaces are described in terms of byte-oriented io.  So to meaningfully
> do io through term, it has to be delivering a byte-oriented encoding such
> as UTF-8.

I thought there might be problems because the UTF-8 encoding is multi-byte.
I don't see a problem with transferring UTF-8 data through our system (that's
what UTF-8 is about), but I thought term and maybe other things would need to
be UTF-8 aware to get the character boundaries right.  Is the fact that UTF-8
is self-synchronizing enough to ensure proper behaviour?  Obviously all the
control characters and such are in the compatible 7-bit ASCII range.  And now
that I look at the Unicode standard (ok, I am lazy, I am looking at the
examples in chapter two), all characters encoded as multiple bytes have the
high bit set in every byte!  This is truly wonderful!  It really seems to be
workable.
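
(Writing it down to convince myself: finding the character boundary really
is trivial if the input is valid UTF-8.  Untested sketch, names made up:)

/* In UTF-8 the first byte of a character is either plain ASCII (high
   bit clear) or has the two top bits set; continuation bytes are
   always 10xxxxxx.  So resynchronizing just means skipping backwards
   over continuation bytes.  */
static inline int
utf8_is_continuation (unsigned char c)
{
  return (c & 0xC0) == 0x80;
}

/* Return the start of the character containing P, never going below
   START.  */
static char *
utf8_char_start (char *start, char *p)
{
  while (p > start && utf8_is_continuation ((unsigned char) *p))
    p--;
  return p;
}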

Sometimes I just need a small encouragement to jump over the shadow of
doubt. ;)

> > The only input driver that will be initially supported is of course the PC
> > keyboard driver, which translates PC keyboard raw scan codes into
> > characters.  This will require a configuration file with a translation
> > table.
> 
> I don't have much opinion about the details of this, except that of course
> we should have at least every feature people like from every other system.
> My main thinking is don't write a new one if you can possibly avoid it.  If
> not implementation code, there has got to be at least a file format and
> tool interface that we can grab from somewhere else and be compatible with.
> What about xkb?

Good suggestion, I will look into it.  I mainly looked at the system consoles
so far, and was hooked on the font stuff, so the keyboard stuff did not get
appropriate consideration yet.  X sounds promising, while I am tending away
from things like the Linux console data (for many reasons, similar to those
for the font stuff).  But at least I want to provide import compatibility
(so that you can easily convert or directly use the existing stuff) with as
many formats as is desirable.  Our own stuff I want to keep as clean as
possible, though (and hell, maybe I will like xkb, it's not as if I weren't
hoping for it).

It's an interesting observation, though, how this stuff is re-invented with
every system and kept separate.

> I would like to get opinions on how this stuff ought to be from people
> working on i18n for GNU as well.

Very good idea.  I will try to get hold of them.

> We should definitely support hardware-related configuration as an
> independent axis from desired local encoding.  

Ok.  You will then be able to map keys to Unicode characters (or strings of
them), special control commands, or raw byte sequences (which, unlike the
Unicode characters, are not interpreted in the console driver).  Hell, while
we are at it we might just as well add some way to specify the encoding
explicitly and let iconv do the work.
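
(Roughly what I have in mind for a single keymap entry; just a sketch,
all the names are invented and nothing about it is final:)

#include <stddef.h>
#include <wchar.h>

/* What a key (in a given modifier state) can be bound to.  */
enum key_binding_type
  {
    KEY_UCS,            /* A string of Unicode characters.  */
    KEY_RAW,            /* Raw bytes, passed through uninterpreted.  */
    KEY_COMMAND         /* A console command: switch VT, scroll, ...  */
  };

struct key_binding
{
  enum key_binding_type type;
  union
  {
    struct { wchar_t *chars; size_t len; } ucs;
    struct { char *bytes; size_t len; } raw;
    int command;
  } value;
};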

> > The output half is similar.
> 
> Again, I don't care much about the details (I'll admit it, I just use X :-)
> but my strong reaction is not to do a new one.  Obviously you won't do a
> new font format, since there are existing fonts to use in existing formats.
> But also for the tools interface and any configuration files and so forth,
> compatibility with something that exists and people use is always a good
> thing.  (People like it, there are configuration front-ends around, people
> have their own scripts they can still use, etc.)

Again, I would like to preserve input compatibility with many formats.  For
example, I planned from the beginning for the possibility of loading psf
fonts.  I think all the conversion algorithms are so small that we can
include all of them in the binary and do the conversion dynamically wherever
automatic processing is possible.
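
(The psf version 1 header, for example, is simple enough that the importer
fits in a few lines.  This is from memory and should be double-checked
against the kbd package before relying on it:)

#include <stdint.h>

#define PSF1_MAGIC0     0x36
#define PSF1_MAGIC1     0x04
#define PSF1_MODE512    0x01    /* 512 glyphs instead of 256.  */
#define PSF1_MODEHASTAB 0x02    /* A Unicode mapping table follows.  */

struct psf1_header
{
  uint8_t magic[2];     /* PSF1_MAGIC0, PSF1_MAGIC1.  */
  uint8_t mode;
  uint8_t charsize;     /* Glyph height in bytes; width is always 8.  */
};

/* The glyph bitmaps follow the header directly:
   (mode & PSF1_MODE512 ? 512 : 256) glyphs of charsize bytes each,
   then the Unicode table if PSF1_MODEHASTAB is set.  */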

However, I also want to try to learn from those experiences.  As it turns
out, interfaces like this hang around for a long time, and making a bad
decision initially makes you look bad five years later.

And I want to make it easier for the user.  Currently, you not only have to
know what keyboard you have, but also which system encoding you want to map
it to (your choice is limited by the available config files), choose the
right font for that encoding (one font, one encoding), and in some cases also
load the right screen map for the font (which maps the encoding to the font).
I think using UTF-8 throughout and a sane font format (which requires an
encoding) makes it possible to reduce all this to the simple question of
which keyboard you have, and maybe which system encoding you want to use (if
it isn't UTF-8 in the first place).  I imagine that a French user with an
American keyboard could easily make these choices, have one of those dead
"windows keys" as a compose key by default, and type away.

(And if you have ever used a French keyboard, I am sure the first thing you
will want to do is to find all those keys in their funny places, type
"fsysopts /dev/hwcon --kbd pckbd:us", and shut your eyes to type blindly.)

> What ever happened about GGI?  It appears that project is dead.  Oh well.

Ah, another good pointer.

> > the dimensions need to be reported back to the user (SIGWINCH anyone?  I
> > think we are missing support for that in the Hurd, and in term in
> > particular).
> 
> It's all there.

Good to know.  I was grepping for the wrong stuff in the wrong places,
apparently ;)  I will check it out.
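
(Just so I keep it straight in my head: from the client side this is
nothing special, the usual SIGWINCH plus TIOCGWINSZ dance.  A minimal
sketch; redraw is of course a made-up placeholder:)

#include <signal.h>
#include <sys/ioctl.h>
#include <unistd.h>

extern void redraw (int cols, int rows);  /* Hypothetical.  */

static volatile sig_atomic_t winch_pending;

static void
sigwinch_handler (int sig)
{
  winch_pending = 1;
}

/* Called from the main loop, after sigwinch_handler has been installed
   with sigaction.  */
static void
check_resize (void)
{
  struct winsize ws;

  if (!winch_pending)
    return;
  winch_pending = 0;
  if (ioctl (STDIN_FILENO, TIOCGWINSZ, &ws) == 0)
    redraw (ws.ws_col, ws.ws_row);
}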

> > Which leads me to the next configuration file needed, the mapping of
> > characters (ASCII, iso8859-1 or whatever) to the encoding used in the BDF
> > font (usually Adobe Standard Encoding, about which I have to get more info,
> > but it can be an internal encoding, too).
> 
> This has got to already exist in some form for X to use.

Right, one more thing for the list.  I am just glad I posted these notes
early on; there is a lot of research still in front of me (which is a good
thing).
 
> > (An implementation detail: If you preload a font to the vga buffer, you
> > can hard code an encoding, so it might be better to leave the decoding to
> > the backend driver, in case it can shortcut it).
> 
> Ah, interesting point.  That seems worthwhile to avoid another conversion
> layer in the io path.  

THAT gets me thinking!  I am running out of time, but I just had this hell
of an idea: If we use UTF-8 throughout, we end up with the vga backend
receiving UTF-8 characters.  Now it can do one of two things: The boring way
to handle this is to require a list of 256 characters that it should be able
to display (for the lazy, this would just be one of the standard encodings
like latin1).  The interesting way is to prescan as many characters as are
available and, taking into account all the characters that are visible on
the screen at that moment, try to find "empty slots" in the loaded font that
it could fill with the glyphs for the characters that are to be displayed.
In other words, the 256-glyph vga font would be used as a cache for a
full-blown Unicode console font.  You could only display so many distinct
characters at a time, and in some cases it would perform badly, but in most
usage patterns it should work very well, certainly well enough for a
two-language setup like German-Greek, or so.  Most of the font slots are
barely used in the western world (if you doubt that, cat a binary file for
a change).
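
(The core of it would be something like the following.  A very rough
sketch with invented names, no eviction policy, and load_glyph standing
in for the code that pokes a glyph into the VGA character generator:)

#include <wchar.h>

#define FONT_SLOTS 256

/* Which Unicode character currently occupies each hardware font slot,
   and whether that slot is used by a character visible on screen.  */
static wchar_t slot_to_char[FONT_SLOTS];
static int slot_visible[FONT_SLOTS];

/* Hypothetical: copy the glyph for WC from the full Unicode console
   font into hardware font slot SLOT.  */
extern void load_glyph (int slot, wchar_t wc);

/* Return the hardware font slot to use for WC, loading its glyph on
   demand into a slot not currently visible on the screen.  */
static int
vga_slot_for_char (wchar_t wc)
{
  int i, free_slot = -1;

  for (i = 0; i < FONT_SLOTS; i++)
    {
      if (slot_to_char[i] == wc)
        return i;                       /* Glyph already loaded.  */
      if (free_slot < 0 && !slot_visible[i])
        free_slot = i;                  /* Remember an unused slot.  */
    }

  if (free_slot < 0)
    return '?';                         /* Cache full; punt for now.  */

  load_glyph (free_slot, wc);
  slot_to_char[free_slot] = wc;
  slot_visible[free_slot] = 1;
  return free_slot;
}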

Call me crazy, but I really like my text mode.  It would not be too hard
to implement.  And I think it would be plain cool ;)

> > The generic code will deal with virtual terminals, generic screen savers,
> > and similar stuff (like displaying a status bar or a clock).  That is the
> > place where you put all the gimmicks.
> 
> A built-in VNC server would be nice (and of course it should be possible to
> run it so there is no real device, just VNC).  Hmm, perhaps it should in
> general be possible to use multiple output backends, e.g. duplicating
> output to two different video adapters for two displays.  (You could even
> add adjacent multi-head support as well as just duplicate head
> support--just give each backend a clipping region.  Another idea is
> multi-head configurations where different heads, or groups of adjacent
> heads, display different virtual consoles.)

This is all very interesting stuff, but I think it is all material for other
backend drivers.  Some code might be shared (the vga code etc.), and I will
keep this in separate files.  Likewise for svga support, framebuffer
terminals, and so on.
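
(To keep that door open, the internal interface could look roughly like
this.  Invented names; the real thing will grow out of the code:)

#include <wchar.h>

/* What the generic console code sees of an output backend (vga, svga,
   framebuffer, a VNC server, ...).  Rough sketch.  */
struct display_ops
{
  /* Set up the device and report the dimensions it can handle.  */
  int (*init) (void **handle, int *width, int *height);

  /* Draw the Unicode character WC with attributes ATTR at (COL, ROW),
     clipped to whatever region this backend was given.  */
  void (*draw_char) (void *handle, int col, int row, wchar_t wc, int attr);

  /* Scrolling, cursor movement, blanking, ...  */
  void (*scroll) (void *handle, int lines);
  void (*set_cursor) (void *handle, int col, int row);
  void (*blank) (void *handle, int on);
};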

> What about DPMS?  If generic screen-saver code decides it's time to blank,
> it needs to tell the hardware-specific backend, which for VGA sends some
> magic byte to an io port.

Currently, my standard answer to that is that we don't have APM support in
the Hurd, which of course means that I have no clue how APM works.  If it is
only about doing some magic to an io port, I can add it easily (if I can
find the docs on it, I will check freevga).  Of course, I also have one or
two generic gimmick screen-savers in mind which actually don't save screens.
(It seems to be unavoidable.  People just love that stuff, and frankly, I
have trouble resisting it myself.  I have a hard time keeping myself from
adding easter eggs that are triggered if you press some sequence of keys.
It's just too tempting.)
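
(If I remember the freevga material correctly, the plain blanking part
really is just one bit in the sequencer's clocking mode register.  A
sketch from memory, to be verified, and on GNU Mach we would of course
go through the io port device rather than raw outb:)

#include <sys/io.h>     /* outb/inb; needs io port access (ioperm).  */

/* Blank or unblank the VGA display by toggling the "screen off" bit
   (bit 5) of the sequencer clocking mode register (index 1 at the
   0x3C4/0x3C5 port pair).  Double-check against the freevga docs.  */
static void
vga_blank (int on)
{
  unsigned char clock_mode;

  outb (0x01, 0x3C4);           /* Select the clocking mode register.  */
  clock_mode = inb (0x3C5);
  if (on)
    clock_mode |= 0x20;
  else
    clock_mode &= ~0x20;
  outb (0x01, 0x3C4);
  outb (clock_mode, 0x3C5);
}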

> > What I am in particular not sure is how we would get a getty on such
> > virtual consoles (surely we would use some /etc/ttys entry, maybe some
> > support needs to be added there).
> 
> Nothing special happens on GNU/Linux or BSD.  It's just standard to list
> several virtual terminals in ttys or inittab (/dev/tty[1-8] on your
> GNU/Linux system).  But it would make sense to have some kind of feature
> where pressing some key would create a new virtual terminal and run some
> command on it, and that could be configured to give you gettys on demand.

Sure enough.  On Linux, it is Meta+UP (but you have to configure it in
inittab).  That should be one of the special commands in the table.
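
(i.e. the /etc/ttys route would just look like the usual thing; the
device names here are made up and the getty path is from memory, so
take it as an illustration only:)

# name  getty command            type   status
console "/libexec/getty 9600"    mach   on secure
vc1     "/libexec/getty 9600"    hurd   on secure
vc2     "/libexec/getty 9600"    hurd   on secure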

> > I have some ideas I did not mention here, and I will see how many of
> > these I get done in time.  I am sure I will not be able to implement all
> > your ideas, but I would not like to miss a particular good one either.
> 
> If the code is structured well, then we always win by starting small and
> adding bells and whistles later.

It's one of the top priorities to separate it well and have reasonable
internal interfaces to build on (and that's the main reason why some things
in the existing colortext have to be arranged differently).

Thanks,
Marcus

-- 
`Rhubarb is no Egyptian god.' Debian http://www.debian.org brinkmd@debian.org
Marcus Brinkmann              GNU    http://www.gnu.org    marcus@gnu.org
Marcus.Brinkmann@ruhr-uni-bochum.de
http://www.marcus-brinkmann.de


