[rfc][icedtea-web] logging to file before file is prepared patch

Jiri Vanek jvanek at redhat.com
Fri Oct 2 12:07:47 UTC 2015


On 10/01/2015 08:39 PM, Jacob Wisor wrote:
> On 01.10.2015 at 01:40 PM Jiri Vanek wrote:
>> On 10/01/2015 02:27 AM, Jacob Wisor wrote:
>>> On 09/29/2015 at 07:22 PM Jiri Vanek wrote:
>>>> On 09/29/2015 06:24 PM, Jacob Wisor wrote:
>>>>> On 09/29/2015 at 01:45 PM Jiri Vanek wrote:
>>>>>> Hello!
>>>>>>
>>>>>> I have been notified about a segfault that occurs when both debugging and file logging are on.
>>>>>>
>>>>>> The issue is caused by printing information to the log file before the
>>>>>> file is actually
>>>>>> prepared.
>>>>>>
>>>>>> The issue has been present since 1.4, but only 1.6 and head are affected - probably
>>>>>> because they log something more. I would like to push it to 1.5
>>>>>> and up.
>>>>>>
>>>>>> The attached patch disables printing to the file before the file is
>>>>>> actually prepared.
>>>>>
>>>>> I do not think this is the proper way to do it. You did not even bother to
>>>>> get to the root cause.
>>>>>
>>>>>> diff -r b02ae452f99f plugin/icedteanp/IcedTeaPluginUtils.h
>>>>>> --- a/plugin/icedteanp/IcedTeaPluginUtils.h    Thu Sep 24 16:32:30 2015 +0200
>>>>>> +++ b/plugin/icedteanp/IcedTeaPluginUtils.h    Tue Sep 29 13:15:10 2015 +0200
>>>>>> @@ -86,6 +86,7 @@
>>>>>>        plugin_debug_to_console = is_java_console_enabled();               \
>>>>>>        if (plugin_debug_to_file) {                                        \
>>>>>>            IcedTeaPluginUtilities::initFileLog();                         \
>>>>>> +          file_logs_initiated = true;                                    \
>>>>>>        }                                                                  \
>>>>>
>>>>> The problem is that for any reason opening the log file may fail at any time,
>>>>> also writing to it may
>>>>> fail at any time. So first, you do not know the state of the log file after
>>>>> IcedTeaPluginUtilities::initFileLog() has returned. You just do not. You are
>>>>> just assuming that
>>>>> initialization of the log file _always_ works.
>>>>> Second, it is better to check plugin_file_log for NULL before writing to it.
>>>>> So, things would rather
>>>>> look like this:
>>>>>>     if (plugin_debug_to_file && plugin_file_log) {        \
>>>>> And third, you had better check the return values of all those subsequent
>>>>> printfs because writing
>>>>> to files can fail at any time. Well, you /can/ ignore them but then you could
>>>>> also stop bothering
>>>>> calling any subsequent functions with this file after an error has occurred
>>>>> (because you know they
>>>>> are probably going to fail too).
>>>>
>>>> I'm aware of the above behaviour. But what do you suggest as the action
>>>> after those checks?
>>>>
>>>> If it fails during initialisation, it deserves to fail. And the crash report
>>>> will lead to discovering what was wrong.
>>>>
>>>> And the same when writing fails - what to do? IMHO, keeping going and hoping
>>>> for the best is the best we can do.
>>>
>>> In my experience and with systems growing ever more complex, it is best to
>>> notify the user as soon
>>> as possible of an error. If everything else fails, writing to stderr is better
>>> than nothing. It just
>>> makes life and error analysis a lot easier, especially for admins.
>>
>> So you actually want me to do, during each file log operation:
>> int i = iocall(...);
>> if (i != 0) { serr("call to iocall ended incorrectly with " + i); }
>
> Yep, sort of. At least for real file IO. There is no need for stdout/err because there is really
> nothing one can do if those fail. You can wrap the error checking into a macro to make it more
> readable or if you feel it would be too much typing.

Actually... don't you want to do this? You seem to know much more than me in
this area... Or at least pinpoint the places where you want to add it?

One nit - when some IO call fails and one prints out "call XY returned Z.
That's bad", where should that be logged? In an ideal world it would be logged
via the error macro. But if e.g. the file log, pipe, or syslog was the cause,
then reporting the error would itself cause an infinite loop. So what's the
idea? Calling the macro with a special "no IO checks now" flag? (Then it would
end up in stdout/err at least, and even better, it might end up in one of the
other three...)
>
>> In the case of file logging only, it would be acceptable - about 20 lines.
>> For other IO calls, I would like to avoid this.
>
> Yep, error checking when writing to files should be enough.

Same as above?
>
>>> Besides, I have noticed that IcedTea-Web does not log into syslog or (today)
>>> journald. It kind of
>>
>> It has :) And it is on by default.
>>
>> grep -r -e deployment.log.system ~/hg/icedtea-web ...
>> http://icedtea.classpath.org/hg/icedtea-web/rev/2dfc5a2fcbe8?revcount=20 - but
>> it was adapted several times.
>>
>>    On the Java side it will get you to
>> net.sourceforge.jnlp.util.logging.UnixSystemLog, where there is a call to the
>> system command logger.
>> There is also WinSystemLog, but its log method does nothing.
>>    On the C side it will get you to the
>> openlog and syslog API calls.
>> As the whole C part is platform dependent, there is not much more to do.
>>
>> Only the most terrible, killing exceptions (ERROR_ALL) go to the system logs.
>
> I do not think it is a good idea to be arbitrarily selective here. What are fatal errors? A system
> log is just another output (or channel if you want), not much different than stdout, stderr, or a
> plain text file. The only difference is that it is a formatted, aggregated, and structured data set.
> Besides, syslog has had a severity feature since its inception, which
> effectively enables admins to tune the amount of messages to log, which in
> turn translates into disk space. So, the same messages should go into syslog
> just like into any other logging output.

See below, please.
>
>>> lacks this feature or maybe default behavior. I know, there has been
>>> discussion about this on the
>>> mailing list previously, and as far as I recall, it ran more or less into a
>>> dead end because system
>>> logs are utterly platform and operating system dependent. But, I think
>>> IcedTea-Web can or should
>>> offer this feature, at least for syslog or journald in the Linux world.
>>> Implemented as a weak
>>> dependency, not as a hard dependency. So that BSD guys and other fundamental
>>> opposers of systemd get
>>> their break too. ;-)
>>> But seriously, although Java lacks a sane system logging framework (the
>>> current implementation is
>>> just a joke), IcedTea-Web can implement this in its plug-in, which is
>>> basically a native shared
>>> library that can accomplish just that. All it needs to do is just write to the
>>> log, so we do not
>>> need to implement all of the logging functionality, like log4j does. We also
>>> avoid introducing a new
>>> build and deployment dependency. This way, messages from the Java and native
>>> world get logged. I
>>> have already implemented this into my Windows port.
>>
>> I believe ITW logging is finished and in good shape.
>
> I beg to differ. ;-)

:) expectedly, justifiably, and respectably   :)
>
>> Both Java and C have a file log (each side its own) and both log to their own
>> separate files. Off by default.
>> Both Java and C have shared stdout/err logging.
>
> Does logging into separate files make sense? Well maybe, and yet stdout/err do get all messages

When I revisited logging, I found that file logging was done just to follow the
Oracle implementation. At the same time I recognized that the ITW debug console
was just terrible.
So I remade the console and hallowed it as almighty. And then I thought again
about what the file log is for, and found that when logging to the standard
outputs, you have the choice of logging err and out to separate channels.
In the console, you have everything together and you may filter.
So why not separate files? You only seldom hunt a bug in both native and Java
code at once; it is mostly only one of them. So it is useful.

Also, when you want Java to log to the same file as the plugin does, you need to
tell Java what file it is. And that is more logic which needs to be written and
maintained.

> (from both worlds). Why set different standards for different outputs (channels)? Again, syslog
> should get really everything, it has been designed to do this.
>
>> The verbosity of the above two logs is affected by the debug switches.
>>
>> Both Java and C have system logging. Only the most terrible, killing
>> exceptions (ERROR_ALL) go to the system logs, and this is not affected by the
>> debug switches.
>>
>> As the best candy there is the debugging console. It always has all messages,
>> not depending on the global verbosity.
>
> Subjectively, a debugging console may seem like a great idea but objectively, this is not where most
> admins will be looking first. They will be looking at stdout/err and syslogs first. They do not need
> a dedicated debugging console in an application, nobody really does, yes even application developers
> do not need it. With the advent of GUIs, stdout/err have become more or less obsolete, or perhaps a
> little bit difficult to come by. This is where syslog steps in; to collect error and status data of
> applications which would otherwise (like stdout/err) not be visible.

javaws/plugin are a deployment environment. So it is nice to have a tool where
you can see what your carefully developed application is actually doing in a
production environment. That is how this console was born in the dark past, I
guess.

In ITW it has one more piece of functionality. When ITW crashes, only the final
chain of exceptions is shown. You can mostly read why ITW failed (and that is
why it goes to syslog), but for understanding the problem or reporting the bug
it is not enough.
So there is a "show console" button in the error dialogue which, when clicked,
shows the console :) Here you can read a lot about why it crashed, you can
search and filter, and - because it is always full debug - you can copy-paste it
and report it. Such a log is very often enough to resolve the issue.
So I really like this tool.

>
>> Before the Java part starts, the plugin saves its messages to a buffer. Once
>> Java starts and logging is available, it flushes all the pre-start messages
>> and then continues piping the other logs.
>
> Again, this is utterly selective and these so-called "pre-start" messages may
> be key in some situations. If launching the JVM or anything before fails, all
> these messages get lost.

Nope - they are not lost. They end up in stdout, or in the file if enabled.
It is just nice candy that they go to the Java console too.
>
>> The Java part itself has a dialog - on by default, but hidden - to show the
>> messages in a rich format and with searching and filtering abilities. And
>> IMHO it is working really well.
>
> It's nice but no one really needs it if you have syslog support - and grep with plain text files -
> properly implemented. journalctl has great filtering capabilities. ;-)

As explained above.

(Well, Thunderbird crashed somewhere in the middle of this message, so I hope I
did not forget anything when writing it a second time :( )
>
>>>> Generally, I would like to have as little logic as possible here, as I do
>>>> not see many effective options for what to do in corner cases. Patches
>>>> welcome.
>>>>>
>>>>> Oh, and limiting a log message size to MESSAGE_SIZE is just weird,
>>>>> absolutely arbitrary, and not
>>>>> really necessary. Furthermore, combining the same message with snprintf
>>>>> multiple times for different
>>>>> output targets is also a waste of computing power.
>>>>
>>>> I remember I had both an unlimited size and a variable with the resulting
>>>> string. But I also remember I had quite a lot of strange issues with it. It
>>>> was two years ago, I really do not remember, but I would like to avoid
>>>> falling into the issues again.
>
> What issues? The only issue I can think of is that syslog may have a limit on message sizes,
> stdout/err and plain text files surely do not.
>
>>>>>
>>>>> Check this out:
>>>>> IcedTeaNPPlugin.h:
>>>>>> extern FILE * plugin_file_log;
>>>>>
>>>>> IcedTeaPluginUtils.cc:
>>>>>> void IcedTeaPluginUtilities::initFileLog(){
>>>>>>     if (plugin_file_log != NULL ) {
>>>>>>         //reusing
>>>>>>         return;
>>>>>>     }
>>>>>
>>>>> The initialization of the log file is definitely wrong here: plugin_file_log
>>>>> is going to be of any
>>>>> possible value after loading the shared object into memory. It must have been
>>>>> just pure luck that
>>>>> plugin_file_log has been initialized to NULL after loading for any time
>>>>> logging to file from the
>>>>> shared object has ever worked. If you want to make sure the log file does not
>>>>> get reinitialized
>>>>> after it has been initialized then plugin_file_log must be explicitly
>>>>> initialized to NULL before
>>>>> making any assumptions whether to initialize the log file.
>>>>
>>>> Ok:
>>>>
>>>> - FILE * plugin_file_log;
>>>> + FILE * plugin_file_log = NULL;
>>>
>>> I think this is it. I have also looked closer into
>>> IcedTeaPluginUtilities::initFileLog(), it is a
>>> little bit oddly written but AFAICT correct. Having said that, you do not need
>>> to introduce
>>> file_logs_initiated and test for it, nor test for plugin_file_log in the
>>> PLUGIN_ERROR() macro. If
>>> plugin_file_log gets initialized to an arbitrary value at load time then
>>> logging into a file does
>>> not get initialized, and plugin_debug_to_file will most probably get
>>> initialized to an arbitrary
>>> value too, so both will evaluate to true, hence fprintf attempts to write to
>>> an uninitialized file
>>> pointer, effectively resulting in a SIGSEGV.
>>
>> So I can proceed with the original patch (+ the above -FILE +FILE change)? As
>> the SIGSEGV was what it was meant to fix :)
>
> Yep, I think so.
>

Thank you very much for review!


J.



More information about the distro-pkg-dev mailing list