Patch Ttsystem 9 06114


When trying to create the binary file using mb-objcopy, using the process described in the following link, I end up with a huge binary file using SDK 13.2. Before I read the above link, huge files were being created for every application, including the simple Hello World app. The above method fixed this, but it doesn't work for all. I know this is caused by the gaps in our system memory map, but there should be some way to fix it.

# For removing the gaps filled by zeros from the bin file
mb-objcopy -O binary -j .vectors.reset -j .vectors.sw_exception -j .vectors.interrupt -j .vectors.hw_exception test.elf ./app1.bin
mb-objcopy -O binary -R .vectors.reset -R .vectors.sw_exception -R .vectors.interrupt -R .vectors.hw_exception test.elf ./app2.bin

The second command results in the massive binary. Anybody got any further info which will resolve this?
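Binary output from objcopy spans from the lowest to the highest loadable address in the ELF and zero-fills everything in between, so a single section at a distant address inflates the whole image. A hedged sketch of the usual diagnosis and workaround (mb-objdump ships with the same MicroBlaze GNU tools; .text/.data here are illustrative, use whatever regions your linker script defines):

mb-objdump -h test.elf                           # inspect the LMA column for outliers
mb-objcopy -O binary -j .text test.elf text.bin  # then emit one .bin per contiguous region
mb-objcopy -O binary -j .data test.elf data.bin

Each per-region .bin can then be written to its own flash offset, so no zero-filled gap ever lands in a file.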


Hi, we use the Mentor CodeSourcery Sourcery_CodeBench_Lite_for_ARM_EABI compiler to develop applications for the M4 core of Vybrid. Everything works fine with the debugger.

For standalone operation, we now want to put the application onto the SD card and let U-Boot start it. We have done this before. I now need to convert the ELF file into a .bin file, which U-Boot can load. I am trying to do this with arm-eabi-objcopy and also arm-none-eabi-objcopy. I run the following steps in a batch file (hence the %1):

arm-none-eabi-objcopy --strip-debug -R .devdata %1 %1.img
arm-none-eabi-objcopy -O srec %1.img %1.srec
arm-none-eabi-objcopy -O binary %1.img %1.bin

All files look reasonable; however, the created .bin file is huge! I have also tried to do this directly from the elf file with -S -O. All attempts lead to some huge file.

I assume this has to do with the fact that the project uses small sections of memory across all the RAMs on Vybrid, and that objcopy creates a contiguous memory file, zero-filling everything between the lowest and highest loadable addresses? What is the solution?

Problem: When issuing WriteFile() in a loop, performance suddenly, at some sharp point (often around 10-50 GB), begins to decrease linearly towards zero. At the same point CPU usage increases suddenly from ~20% to ~60%. The problem has only occurred on fast RAID 0 drives, both hardware RAID and software RAID, where performance decreased from ~300 MB/s to less than 10 MB/s.

The problem does not occur on slower (80 MB/s) single drives. It has been verified that fragmentation, or data going to inner (slower) cylinders, is not the cause. Experiments with a test program that writes data to a file in a loop:
1) Enabling FILE_FLAG_SEQUENTIAL_SCAN did not help.
2) Enabling advanced performance in Disk Policies in the Device Manager did not help.
3) Varying disk chunk size from 64 KB to 4096 KB did not help. Going above 256 KB seemed to decrease performance even more, and from the beginning of writing.
4) Executing close-file + open-file + seek-to-end every 10 GB did not help. Also not when incurring a Sleep() for 200 seconds before the reopen.

5) Terminating the test program, waiting for several minutes and restarting the program, making it append to the same file, did not help. It resumed being just as slow as when it terminated.

6) Terminating the test program, executing it on a different temporary file for some 4-5 GB, then letting it resume on the original file *HELPED*. Note that performance was OK on the temporary file too.
7) Using FILE_FLAG_NO_BUFFERING *HELPED* and kept CPU usage at just ~3%.
8) Pre-allocating 1 GB of space for each 1 GB written, using SetFilePointer + SetEndOfFile, *HELPED*!
Experiments 6, 7 and 8 helped. It seems like the problem is in the file-level layer of Windows. Perhaps in some data structures when searching for free clusters?

Test program: Usage: write <destination file to create> <chunk size in KByte> [B]; the B flag makes it use FILE_FLAG_NO_BUFFERING. Hardware: Asus M2N mainboard with nForce 430 RAID controller, Athlon FX 6400+.

Various 2- and 4-disk configurations of RAID 0, using both the nForce controller and Vista's own software RAID. Disks are various 320 and 500 GB 7200 RPM SATA disks. Software: Windows Vista Ultimate, US, build 6001, SP1. First trying Windows' own RAID drivers for this controller, then downloading nVidia's latest drivers and software. Stripe sizes of 64 K and 128 K have been tested, always using a 64 K cluster size on NTFS.

Very interesting. Yes, Windows has had some issues, and some are fixed in Windows 7. Both caching and the file system code are involved, and both have changed. I am not a Microsoft employee - was in the past - but I will do what I can to make sure the appropriate people pay attention to the valuable data people are posting in this thread. Caching for sure has issues: doing the xcopy in a pull and push fashion gives you completely different results. The Microsoft perf team has a blog where they find eseutil is the best way to copy files; eseutil opens files in a non-cached manner. Dilip, www.msftmvp.com and VHD tools at www.VMUtil.com.

All clogMessageGenerated notifications from devices should be properly decoded by the management station.

The entire message is encapsulated. Whether or not processing messages using the syslog-to-trap encapsulation is more useful than the syslog itself depends on your management station. I happen to think RME (4.x in any event) does a good job with processing syslog messages and turning them into external actions. Since the trap breaks out facility and severity into separate varbinds, the trap manager also has a good chance of doing useful things with the syslog notifications, but this may require more scripting work on your end.

I'm trying to compile the MIB for the PowerConnect 3348 and keep getting an error.

Understanding the Basics of Digital Memory. What does it mean to have 100 MB or even 32 GB of storage? First, a byte is a series of zeros and ones. For example, the letter 'A' is represented by the set of bits A = 01000001. A kilobyte (KB) is 1000 of these bytes. 1000 KB is a megabyte (MB). 1000 MB is a gigabyte (GB).

Still not making much sense? Let's compare digital memory to a book. If bytes were individual letters in a book, it would take a kilobyte of them to fill a page. A megabyte is then a 1000-page book, and a gigabyte would be a 1000-book library section.

A terabyte is an entire library of 100,000 books, and a petabyte would be 100 million books! (Google counts there to be roughly 129 million books in the world.) How many photos can we fit in our book? This depends on your camera size; most smartphone cameras have a back camera of 6 MP (megapixels) or more. Your phone squeezes and compresses this to free up more space. So our book (1 GB) can hold about 475 photos. At full-size resolution, about 50 pictures.

SanDisk has a very good breakdown. See the figure below.

[Figure: SanDisk - Number of pictures that can be stored on a memory device] This shows that a standard 16 GB '16-book' SD card can hold 800-8000 images at 6 MP quality. What else is stored on the SD card? Video - a video is a glued-together string of photos. Songs - depends on quality and type.

Anywhere from 1 page to several. Contacts - like words and sentences in our book: lots and lots.

App data - depends on the app or the document, but generally not significant. With more and more phones eliminating removable memory cards and phone cameras increasing in resolution, hopefully you can now make a better decision between price and capacity.

Visual conversion of bits and bytes:
1 Bit = Binary Digit
8 Bits = 1 Byte
1000 Bytes = 1 Kilobyte
1000 Kilobytes = 1 Megabyte
1000 Megabytes = 1 Gigabyte
1000 Gigabytes = 1 Terabyte
1000 Terabytes = 1 Petabyte
1000 Petabytes = 1 Exabyte
1000 Exabytes = 1 Zettabyte
1000 Zettabytes = 1 Yottabyte
1000 Yottabytes = 1 Brontobyte
1000 Brontobytes = 1 Geopbyte
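For checking figures like these on a machine, GNU coreutils ships a converter; a small illustration, assuming numfmt is installed (values chosen to land on round numbers):

numfmt --to=si 1000000       # decimal prefix: prints 1.0M
numfmt --to=iec-i 1048576    # binary prefix: prints 1.0Mi

The decimal (SI) reading used in this article matches what storage manufacturers print on the box; operating systems often use the binary reading instead, which is why the same card can show two different sizes.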

Hi, I am having an issue with a device in Device Management. I have added it manually and I made sure the SNMP community matches, but I still see it with a '?'.

I have done an SNMP walk and I get the following. The following is an SNMP walk of device 10.9.40.50 starting from system:

SNMP Walk Output
--------------------------------------------------------------------------------
RFC1213-MIB::sysDescr.0 = STRING: 'Cisco IOS Software, 7200 Software (C7200P-ADVSECURITYK9-M), Version 12.4(4)XD9, RELEASE SOFTWARE (fc1) Technical Support: Copyright (c) 1986-2007 by Cisco Systems, Inc. Compiled Tue 16-Oct-07 21:36 by pwade'
RFC1213-MIB::sysObjectID.0 = OID: CISCO-PRODUCTS-MIB::cisco7206VXR
DISMAN-EVENT-MIB::sysUpTimeInstance = Timeticks: (225645626) 26 days, 2:47:36.26
RFC1213-MIB::sysContact.0 = ''
RFC1213-MIB::sysName.0 = STRING: 'MX-RTR-MPLS.mx.brightstar.com'
RFC1213-MIB::sysLocation.0 = ''
RFC1213-MIB::sysServices.0 = INTEGER: 78
SNMPv2-MIB::sysORLastChange.0 = Timeticks: (0) 0:00:00.00

So I've been using Time Machine for a good few months now, and I'm very pleased.

From the start, my backups were something like 10 megabytes each. I could go for an entire day, and the size of the backup would be no more than 100 megabytes at worst. Now, however, the size of the backups routinely tops 2 gigabytes.

I am absolutely certain that I haven't generated that much new data on my hard drive. What's more, when I hit 'Back Up Now' immediately after a backup, it takes up about 110 megabytes. What can possibly cause such a huge jump in backup size? I recognize this: for a few weeks now, TM has been backing up my entire hard drive (about 100 GB) every other day! It does incremental backups, but sometimes (mostly overnight) it just does a full backup. It still shows as a single sparsebundle on my Time Capsule.

SNMP MIB parser needs to handle duplicate object names
------------------------------------------------------
Key: HHQ-923
URL:
Project: Hyperic HQ
Type: Bug
Components: Plugins
Reporter: Doug MacEachern
Assigned to: Doug MacEachern
Priority: Minor
Fix For: 3.0.6

The netdevice-plugin.jar includes the IF-MIB, which defines object names such as ifNumber, ifIndex, etc. If these names are defined in another MIB (e.g. OPENBSD-PF-MIB) with a different OID, the first one wins. We need to be able to qualify, like so: OPENBSD-PF-MIB::ifIndex.

The current workaround would be to disable the netdevice plugin in agent.properties, which prevents the IF-MIB from being loaded: plugins.exclude=netdevice

-- This message is automatically generated by JIRA. If you think it was sent incorrectly contact one of the administrators. For more information on JIRA, see: http://www.atlassian.com/software/jira.

[ ] John Mark Walker updated HHQ-923:
Fix Version: (was: 3.0.6)
Version: 3.1.5, 3.2.0
Updating info - reported by Mirko.

Hi, we are developing a media application using IPP UMC classes. By trial and error we could see that we need the following:

#pragma comment (lib, "h264_enc")
#pragma comment (lib, "umc")
#pragma comment (lib, "vm")
#pragma comment (lib, "vm_plus")
#pragma comment (lib, "h264_dec")
#pragma comment (lib, "color_space_converter")

This is because we had to include all the above projects from ipp-samples for compilation and building to work. But the problem here is that the built image is a huge 11+ MB. Please let me know how this can be reduced by a good extent, as we have a very critical requirement that the binary size not exceed 2 MB.

Hi, I am working on WebLogic Portal 9.2. In the Virtual Content Repository I am seeing the following properties in the advanced section of the repository:

Search Enabled: false
Search Indexing Enabled: false
Full Text Search Enabled: true
Streamable: false
Binary Cache Enabled: true, Time To Live (seconds): 3600000, Max Entries: 100, Max Entry Size (bytes): 1024
Node Cache Enabled: true, Time To Live (seconds): 3600000, Max Entries: 100

I am not sure why Time To Live is set to the huge value 3600000.

Is it provided by default by WebLogic Portal? What is the significance of the Binary Cache and Node Cache? Because of this huge 3600000 value, my content is not displaying on the page.

After setting both caches, i.e. the Binary Cache and Node Cache, to false, I am able to see content on screen. Also, instead of updating the time from the WebLogic Portal Administrator, can I set the Time To Live property in content-config.xml or p13n-cache-config.xml?

Time-to-live is actually specified in milliseconds (so 3600000 is one hour, not ~41 days); it looks like the label is incorrect. The node cache holds information about nodes. So if a user fetches a node, it will be stored there. And if another user (or the same user) later needs the node again, there is no need to access the repository.

While the nodes are shared between users, authorization is respected during retrieval. The binary cache holds binary data. So if a node has a binary property value such as an image, it will be stored in the binary cache if needed. This provides quick access to the binary value. Note the 'binary-cache-max-entry-size' setting on the repo config controls the maximum size of a binary which can be stored in the binary cache. So a binary which is 'too large' will not be cached.

Yes, you can configure the caches via META-INF/p13n-cache-config.xml. Cache config settings (and content-config.xml settings) are FIRST read from the deployment plan (if any). If not found there, they are read from the META-INF config files. There were several patches made in WLP 9.x to fix CM caching issues.

It's possible the issue you are describing has already been fixed. Be sure you're up-to-date on patches. -Steve Edited by: sroth on Nov 25, 2009 10:46 AM.

Good morning, I use KDS 1.1.1 for the K10 micro. I created a compilation profile for updating my software using the bootloader. In the Toolchain I checked 'Create flash image', and in 'Cross ARM GNU Create Flash Image' I selected Motorola S-record with the options --srec-len=80 --srec-forceS3. The file created starts with 'SF626F6F7348' but is not recognized by the bootloader. To solve it, I need the first line to be 'S0030000FC'. Is there any option in objcopy for KDS to remove the comment in the first line of the Motorola file, to get 'S0030000FC'? Thank you. Regards, Mirko.
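A hedged post-processing workaround, assuming the bootloader only objects to the header record: objcopy encodes the output file name into the S0 line, so overwrite line 1 with an empty header after the fact (file names here are illustrative):

sed '1s/.*/S0030000FC/' flash_image.srec > flash_image_fixed.srec

S0030000FC is a well-formed empty S0 record: byte count 03, address 0000, checksum FC, which is exactly the first line the bootloader expects.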

Hi Windows Team, I have been encouraging this to many companies and independent developers. Please have a read here: For example, a CD-R has 700 MiB; a DVD-R has 4.7 GB, or 4700 MB.

Requesting Feature: Support for SI Units

Historical context: Once upon a time, computer professionals noticed that 2^10 was very nearly equal to 1000 and started using the SI prefix kilo to mean 1024. That worked well enough for a decade or two because everybody who talked kilobytes knew that the term implied 1024 bytes.

But, almost overnight a much more numerous everybody bought computers, and the trade computer professionals needed to talk to physicists and engineers and even to ordinary people, most of whom know that a kilometer is 1000 meters and a kilogram is 1000 grams. Then data storage for gigabytes, and even terabytes, became practical, and the storage devices were not constructed on binary trees, which meant that, for many practical purposes, binary arithmetic was less convenient than decimal arithmetic.

The result is that today everybody does not know what a megabyte is. When discussing computer memory, most manufacturers use megabyte to mean 2^20 = 1 048 576 bytes, but the manufacturers of computer storage devices usually use the term to mean 1 000 000 bytes. Some designers of local area networks have used megabit per second to mean 1 048 576 bit/s, but all telecommunications engineers use it to mean 10^6 bit/s. And if two definitions of the megabyte are not enough, a third megabyte of 1 024 000 bytes is the megabyte used to format the familiar 90 mm (3 1/2 inch), '1.44 MB' diskette.

The confusion is real, as is the potential for incompatibility in standards and in implemented systems. Faced with this reality, the IEEE Standards Board decided that IEEE standards will use the conventional, internationally adopted, definitions of the SI prefixes. Mega will mean 1 000 000, except that the base-two definition may be used (if such usage is explicitly pointed out on a case-by-case basis) until such time that prefixes for binary multiples are adopted by an appropriate standards body. Please give us the choice to display the preferred units. Set the standard for others.

Bytes are not covered by the International System of Units, so the prefix meanings may differ from the SI definitions of kilo and so on. That is fairly true.

However the SI enjoys a broad usage across every field in science, industry etc. Thus not using it or using it the wrong way would confuse customers.

As is the case today. Additionally, it is hard to explain to non-proficient users without being dissed for how bonkers computer scientists can be. Therefore I would recommend integrating the binary prefixes as well as the correct meaning of the existing prefixes. The default setting should be SI prefixes, because users are used to them (in school, at work, etc.). Internally, say for developers, engineers, etc., both prefix variants are acceptable and useful.

You say the TDMS file is 300 MB, but how big is the index file?

If it is also large, say more than 1 MB, then you have a fragmentation problem, which is quite common. TDMS is a streaming file format and is very fast. To be fast it has some overhead when flushing data to disk. This overhead is usually quite small, but writing this overhead data many times adds up, and you get issues like the ones you are describing. Here is an article talking about fragmentation: If you already have the files, and don't have control over how new ones are made, then all you can really do is run them through the defrag function. For large files it will take a very long time to process, but when it is done your files will be much smaller and easier to manipulate.

Unofficial Forum Rules and Guidelines - Hooovahh - LabVIEW Overlord. If 10 out of 10 experts in any field say something is bad, you should probably take their opinion seriously.

Hi there, we use Forte C++ 6U2 and are compiling a few libs (ca. 30, statically linked) into one static binary.

The libraries (with debug info) are relatively small. The binary is unusually large and contains many objects of type LOCL. These objects are not used by the binary, so I believe that only the GLOB objects are really needed. How can I strip the LOCL objects out of the binary (the 'strip' command does not work), or better: not put them into the binary right from the start?

Binary with debug = 9 MB; stripped binary = 6 MB; binary size that I'd like to have = 3 MB (or less ;-).

Can anybody help me, or does anybody have the same problem (big binaries, unused objects in the binary)? Yours, Dirk.

How do you know that the LOCL objects are not used? Some LOCL symbols might be inline functions that are generated out of line when you compile using -g.

The extra material should not be generated if you do not use the -g option. Normally you should not compile production code using -g.

The option disables some optimizations, and has other undesirable effects. If these hints do not help, I'd need to see an example. Another thought: For C++, the -g option causes local functions to be generated for each default function argument value.

I have a UserControl which creates a Ken Burns effect and blends images. Its XAML defines two Image controls and two Storyboards which blend from one Image to the other by animating the Images' opacities. It has a DependencyProperty of type ImageSource, which is bound to the Uri property of the ViewModel.

As soon as the ImageSource changes, it's used as the Image.Source of one of the Images. As soon as Image.ImageOpened is raised, the StoryBoard makes the previously visible Image non-opaque and the just opened Image opaque.

At the same time a further Storyboard is (re)calculated and started. It's defined in code and animates 5 properties of the new Image's RenderTransform/CompositeTransform. All this works like a charm and looks really beautiful.

After changing the ViewModel's Uri some 15 times, the control's memory usage is ~85 MegaByte. After changing the Uri further 100 times, the memory usage still is 85 MegaByte. Great: There's no memory leak. Bad: 85 MegaByte (plus the rest of the app) won't fit very well into the 90 MegaByte which are available on small memory Tango devices. So I tested without running any animation. The memory usage fell to ~20 MegaByte.

I did some calculations. The average source image is roughly 1000x1500 pixels with 3 bytes each, which makes roughly 5 megabytes per bitmap. With two Image controls (and the fact that Silverlight doesn't free ImageSources when it doesn't feel like it) this makes 10 MB at a time.

This, plus the 10 MB used by the graphics hardware (as reported by the FrameRateCounter, and as implied by the automatic BitmapCaching for animated controls), could explain the mentioned 20 MB. (By the way, I have no idea if ApplicationCurrentMemoryUsage includes texture memory.) But I can't explain the 85 MB. I can't explain the 30-50 MB texture memory usage reported by the FrameRateCounter.

Even if Silverlight used the BitmapCache a second time (since there's a second Storyboard involved), this would sum up to 30 MB. That is nowhere near 85 MB.

Any idea where the remaining 55 MB could have gone? Does ApplicationCurrentMemoryUsage include texture memory? Does Silverlight cache huge objects, simply because it can? Would the same app automatically use less RAM on devices with lower hardware specs? I indeed didn't manually set one of the two BitmapImage.UriSource to null. Thank you, Korhaan!

Now the memory consumption of the Control is relatively stable at 20 MB (with 40 MB at the time both images are visible), and texture memory is stable at 10000 (20000). My conclusions:
- texture memory as shown in the FrameRateCounter is measured in KB
- texture memory is included in Microsoft.Phone.Info.DeviceStatus.ApplicationCurrentMemoryUsage
- the memory issue - including the requirement to nullify the UriSource of a system-created ImageSource - is probably a non-issue. On a 512MB device it never crashed, and the graphics system probably allocates as much memory as it can. This is good because that's what RAM is for.

No idea how it behaves on a real 256MB device, but I guess the graphics system will simply allocate less memory for caching. - I'll nevertheless set any BitmapImage.UriSource to null, simply because it makes me feel better. Thank you Mark, thank you Korhaan!

I work on some C and C++ based projects, and use gcc and icc in alternation for quality reasons. I use roughly the same level of optimization with gcc and icc. On C based programs, the binary size generated by icc and gcc is quite comparable, +-10%, which makes sense. On the C++ based program, icc does much worse. The program consists of some 55,000 lines of C++ code, according to sloccount, and is Qt based.

Some rough figures of binary size:
-O3 -s: GCC 1.75 MB, ICC 2.8 MB
-O2 -s: GCC 1.65 MB, ICC 2.7 MB
-O1 -s: GCC 1.6 MB, ICC 1.9 MB
With icc, I do not use any special optimization like -ipo -parallel -xT; if I do, it gets even worse. While speed is important, the speed gains of icc are in the range of 10-15% and do not justify such an increase in binary size. The main thing that puzzles me is why icc's C++ binary size is so much worse than its C binary size, in comparison to gcc. Any ideas or suggestions?
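One hedged way to see where the extra bytes live before blaming the optimizer (GNU binutils assumed; run it on unstripped binaries, i.e. built without -s; file names are illustrative):

size -A prog_gcc prog_icc               # per-section sizes for both builds
nm -C --size-sort prog_icc | tail -20   # the twenty largest symbols, demangled

If the growth is concentrated in exception-handling sections such as .eh_frame, that points at C++ EH data rather than your own code.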

I use the following: OpenSUSE 10.3, icc (ICC) 10.1 20080602, gcc (GCC) 4.2.1. I found the true culprit for the huge binary size: C++ exceptions. Luckily, I do not use them in my code. So I report the binary sizes with the new options: -O3 -s -fno-exceptions -fno-inline gives GCC 1.25 MB, ICC 1 MB. So ICC can actually make significantly smaller C++ code than gcc if we use -fno-exceptions -fno-inline, even with -O3, which enables advanced optimizations. For me, reducing the binary size from 2.8 MB to 1 MB is a big thing.

I found this tip in the following document: it is GCC and Apple specific, but the ideas and compiler options are also mostly valid for Linux and ICC.

I am confused with this portion of your design statement: 'i burn the executable.bin into the flash using digilent adept flash tab at location 0'.

I am confused because next you state: '4) i initilze the bram with the elf file and download the download.bit. To the Nexyx2'. So are you downloading the bit file to the FPGA from iMPACT, or XPS? Or are you programming the bit file into the flash and booting from the flash on startup?

Also, have you tried reading back the contents of the flash starting at flash offset 0 to verify that the contents match what you would expect to see in the .text section of your executable? You could verify this by doing a dump of the elf file and comparing it to the actual contents you read back from the external flash when the application is downloaded.

I am using u-boot 2014.04 on an i.MX6 board (based on a nitrogen6x). I read the documentation and posts here on the forum, and took in multiple guides: i.MX_6_Linux_High_Assurance_Boot_(HAB)_User's_Guide.pdf, How-to enable HAB in i.MX6.pdf, AN4581.pdf, secure_boot_on_imx6.pdf, HAB4_API.pdf, HABCST_UG.pdf, etc., then the BLN_CST_MAIN_02.01.01.tar.gz package. I was not able to find the secureboot_scripts.tar.gz package; according to the doc I need it because my u-boot is bigger than 0x2F000, so I cannot statically allocate the HAB data. I found the secure-boot script utilities in imx-linux-test.git in /test/mxc_secureboot/V2012; there is no 2014 version, but after looking at the scripts they look OK and automate the job instead of doing it by hand.

I followed the guide from i.MX_6_Linux_High_Assurance_Boot_(HAB)_User's_Guide.pdf and installed the scripts according to the README from test/mxc_secureboot/V2012/README. I generated the keys with hab4_pki_tree.sh, then the SRK file with the srktool utility; it created an SRK_1_2_3_4_fuse.bin (32 bytes) and an SRK_1_2_3_4_table.bin file (1088 bytes).

Hi All, I have a Gen 1 TC that stopped working the other day. It's a replacement TC from a few years back, after the power supply died in the other one. It appears that the WiFi and the switch ports are not talking to each other anymore. I have tried soft reset, hard reset, and firmware downgrade and upgrade.

I'm showing ~35% mem usage on 4 GB RAM with only one Firefox tab and two urxvt windows open. I'm getting conflicting information. It looks like conky and firefox have both grabbed about half a gig together--even when I kill both and restart.

Windows Live Mail import contacts is very very ssssloooooooooowwwww. I am importing about 2300 contacts from a CSV file created using the export function from WLM on the same computer before I had to do a complete Windows re-install. The export function created the file quite quickly.

The import has been going now for about 15 hours and has just gone past 2000 contacts, so I guess it's got another couple of hours or so. This is quite extreme and ridiculous! Am I doing something incorrectly or missing something? In earlier mail clients like Outlook Express and Outlook this task took a matter of seconds. This looks like a huge step backwards; even a computer in the steam age would dispose of such a simple and mundane task without taxing its single megabyte of RAM.

I have a customer using NetWare 6.5 SP3 w/post-SP3 fixes, Apache 2.0 and GroupWise WebAccess 6.5.4. WebAccess gets used rigorously, lots of connections.

Every day the memory taken by Apache2.nlm grows by about 10 megabytes; today it's consuming 150 MB of RAM. If I unload and reload it, it starts at about 1 megabyte (so it can be reset) and then starts to grow again. I don't mind creating a CRON procedure to unload and reload it weekly or something, but wonder if there's a better fix? Thanks in advance.

Hi, when I try to program flash memory on my Spartan 3E with EDK 9.1 Service Pack 2 I get the following error:

JTAG chain configuration
--------------------------------------------------
Device  ID Code   IR Length  Part Name
1       01c22093  6          XC3S500E
2       05046093  8          XCF04S
3       06e5e093  8          XC2C64A_VQ44_1532

Error Executing xmd Script: C:/EDK/data/xmd/flashwriter.tcl
Error: ERROR(201): Could Not Detect MDM Peripheral on Hardware. 1. If FPGA is Configured Correctly 2. MDM Core is Instantiated in the Design Done!

Also I can't seem to create a .bin file in the EDK Shell. I've tried this:

1. Create an elf with no-load set on FLASH memory-mapped regions, namely 'volatile.elf'. In XPS, select Project → Launch EDK Shell.
$ mb-objcopy --set-section-flags .text=alloc,readonly,code --set-section-flags .init=alloc,readonly,code --set-section-flags .fini=alloc,readonly,code --set-section-flags .rodata=alloc --set-section-flags .sdata2=contents --set-section-flags .sbss2=contents ./ExecuteFromFlash-XPS/executable.elf ./ExecuteFromFlash-XPS/volatile.elf

2. Create a binary image containing the FLASH-mapped sections, allowing subsequent download by Flash Writer, namely 'flash.bin'. In XPS, select Project → Launch EDK Shell.
$ mb-objcopy -O binary -j .text -j .init -j .fini -j .rodata -j .sdata2 -j .sbss2 ./ExecuteFromFlash-XPS/executable.elf ./ExecuteFromFlash-XPS/flash.bin

I just typed all this into the EDK Shell, so maybe I left something out. Can anyone help??

Working on a ZC702 board: I generated a uImage from vmlinux (which I generated through the PetaLinux flow) as follows:

host# objcopy -O binary vmlinux vmlinux.bin
host# gzip --best --force vmlinux.bin
host# ./tools/mkimage -A sh -O linux -T kernel -C gzip -a 0x00008000 -e 0x00008000 -n 'uImage' -d vmlinux.bin.gz uImage

Once I got this uImage, I put it on an SD card.

My IRC client is jircii, which is written in Java. I noticed that on OSX, it has a 40 megabyte RSS and 600 megabyte VSZ. I tried another IRC client written in Java, and it has nearly the same memory usage.

I wrote simple Hello World programs. The Swing version has a 25 megabyte RSS, and the java.util version has a 12 megabyte RSS. Isn't that excessive for Hello World?
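For reference, RSS/VSZ figures like the ones quoted here are what ps reports; a minimal check (standard ps options; pgrep -f matches the full command line):

ps -o pid,rss,vsz,comm -p "$(pgrep -f jircii)"

RSS counts resident pages actually in RAM, while VSZ counts all mapped address space, which is why a 600 megabyte VSZ is less alarming than it looks.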

Estimating the memory usage by subtracting a 25 megabyte Java overhead still leaves the irc client using 15 megabytes. I would like to see more reasonable resource usage. Does something like J2ME exist that targets the Mac and PC? I will look for logging when I find the time. Do you know of a good way to check for memory leaks? This code was not written by me, but I do have the source. I am new to Java, and stumbling through things.

My hobby systems tend to have many processes running at the same time. For comparison, the classic ircii client written in C takes 1 megabyte of memory. If I should expect a 70-fold increase in memory usage, then my resources may be inadequate for the total sum.

I was daydreaming about using Java to write a 'world simulation', with a Java implementation of JavaScript as the user's extension language. The MOO folks were able to do a 'world simulation' 30 years ago with the resources of the day. I imagined I could do it in Java with 3 tiers: a client, a server, and a database back-end. Memory usage would probably be something for me to get a handle on right away.

Hi, I'd like to find a way to change how my disk drive reports capacity. All this occurred during some partitioning with a CD on a WD WDC-WD1600BEVS 160GB drive, when it suddenly started to display all capacities in binary format. Until now I've tried to solve it by wiping the whole drive; I have reinstalled Windows 7 and also flashed the BIOS afterwards, but it didn't seem to help in turning it back to decimal.

Already when creating a new partition with the Windows CD, the value was in binary and not in decimal as it used to be. When viewed inside Windows with the help of the Disk Manager, it reports all other devices correctly in decimal values, but my disk remains in binary. I have been checking the drive with a couple of other tools as well, but everything shows the status is OK. One odd thing: the BIOS still shows this WD disk as 160GB, and so do a few other diagnostic tools. Inside Windows and inside partition tools it shows the capacity in binary format only.

Binary: For simplicity and consistency, hard drive manufacturers define a megabyte as 1,000,000 bytes and a gigabyte as 1,000,000,000 bytes. This is a decimal (base 10) measurement and is the industry standard. However, certain system BIOSes, FDISK and Windows define a megabyte as 1,048,576 bytes and a gigabyte as 1,073,741,824 bytes.

Mac systems also use these values. These are binary (base 2) measurements.

WDC WD1600BEVS-07RSTO
Serial nr: WD-WXE607138353
Firmware version 04.01G04
UDMA Mode 6 (Ultra ATA/133)
160.04 GB - decimal capacity (160.039.272.960 bytes total size)
149.05-149.1 GB - binary capacity in Windows
Cache - 8192 KB, NTFS
S.M.A.R.T., LBA 48-bit enabled (maximum 312581807), health status OK
BIOS Phoenix v1.9 11/12/07
FS Amilo Li1718 notebook
CPU Intel Core Duo T2450 2.0GHz, x86, 2.0 GB RAM

Could it be that the firmware has to be installed once again? And how come partition tools can mess up a disk like this; shouldn't a disk have protection against issues like this? Looking forward to help in this matter!
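The two capacities are the same byte count read under the two conventions described above; a quick check against the drive's own total, assuming bc is available:

echo 'scale=2; 160039272960 / 1000^3' | bc   # decimal: 160.03 GB, as on the label
echo 'scale=2; 160039272960 / 1024^3' | bc   # binary: 149.04 GiB, as Windows shows

So nothing on the disk itself changed; the difference is only which divisor the reporting tool applies.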

I'm trying to create a bootloader application for booting MicroBlaze from 32Mbit SPI flash. I've looked at the examples, and everything works fine. On power-up the FPGA is configured from SPI flash, then the bootloader starts from block RAM; it copies the main program from SPI flash to DDR RAM, and MicroBlaze starts executing it.

But problems occur when I try to boot my application, which is bigger and has 64KB of stack and heap (I need that for the graphical user interface output to a VGA display). In this case the bootloader starts, copies my application from SPI flash to DDR RAM and starts it. But I only get some initial messages (like 'starting grapical user interface.') on the serial port and a small part of the user interface drawn on the VGA display, and then the application stops. I've tried the following:
- download my_application/executable.elf using the XMD 'dow' command and start it using the 'run' command (it works perfectly)
- modify my_application to write the current DDR RAM content to SPI flash and reboot (the same problem as described above)
- generate a binary (*.b) file from my_application/executable.elf with mb-objcopy, download the binary file with XMD 'dow -data <binary file> <ram start address>', set the program counter to 0x0 and 'run' (similar problem as above, and also messages from my application that memory for some features could not be mapped - out of memory)
Could these problems be due to stack or heap overflow?

If so, how come it works fine if I download the *.elf file? Is it possible for my bootloader application to do exactly the same steps as the XMD command 'dow my_application/executable.elf'? If anybody has any idea what is causing these problems or how to solve them, I would really appreciate it. Kind regards!

My simple home page has exactly one js script: a clock. Every second, it takes a timer interrupt to update the clock. Each tick causes the memory usage, as shown by Win 7 task manager, to jump by almost 1 meg.

The 1 meg keeps jumping for about 30-40 seconds (!) and then drops back to nearly the original value. This is classic 'lazy, let the garbage collector do it' behavior that has been around since Lisp. I shudder to think what AJAX does, likely gigabytes with a dog-meat slow garbage collection cycle. I've seen 1 gig levels when I have a couple of sites open. So, when will a Firefox be available that doesn't use multiple 100's of megabytes for the nil page or 189 megabytes for this (and only this) page?


For ESX host SNMP management, you can use HP SIM, Dell OM, or IBM Director agents and monitor them. You can also use SolarWinds VM Monitor, Veeam Monitor, vFoglight, Nagios, SCOM 2007, and a bunch of others available, so it depends.

If you just want to monitor your virtual machines, then using a free tool such as Nagios is fine. We're running Dell OpenManage for basic monitoring of our ESX farms. For our Windows servers, we're using SCOM 2007 to monitor everything through deployed SCOM agents. We also integrated SCOM 2007 to monitor the ESX hosts, but it's just simple, basic coverage. To get the most out of ESX monitoring, you can use the nWorks Management Pack for ESX; it works perfectly with SCOM 2007. So, I'm not sure what you want to accomplish, but you can decide from the details above and implement them. If you found this information useful, please consider awarding points for 'Correct' or 'Helpful'.

Regards, Stefan Nguyen VMware vExpert 2009 iGeek Systems Inc. VMware, Citrix, Microsoft Consultant. String is immutable so it has to copy your char[] to a new char[] backing the String. If it used theSure thing, but I'm passing on a byte[] array, and afaik, it's not using this array and should only copy it.

The copy you pass in will also still be allocated - until it is no longer referenced. Oh hey, could it be that the bytes are saved in Unicode? That would double the size. And make sense. Yes, a char is a 16-bit entity. Oh, and PS: maybe you should read this if you really think strings are immutable.

(although you are generally correct :)) I have used Reflection to gain access to the internals of String to avoid the overhead of toCharArray() creating a new char array - but I stop short of actually changing a String.

Hi, I have a weird problem with mb-objcopy: it produces different SREC files for the same elf file, depending on the path of the output file.

I have been trying hard to develop an SNMP sub-agent (enterprise specific). I created all the binaries and the respective .acl, .reg, .rsrc files for my subagent; even the binary I generated and registered with the master agent! Everything seems to be working if the OID I poll for is from mib-2, but if I give my enterprise OID from the MIB I developed, snmpdx says core dumped! When I tried to open the core file, I could see the following text: 'Error while receiving a pdu from %s:%s', 'bad PDU type (0x%x) received from %s', 'no variable in PDU received from %s'!!

Hi, I am working with the SP605 board and I have written a custom bootloader that reads the compact flash, searches for a file called IMAGE.BIN (FAT32 filesystem) and copies it to RAM (starting at 0x88000000 and so on).

What I have done is compile the code I want to execute (taking into account that it will be placed in RAM, by using the linker script editor) and then convert the ELF to BIN with the mb-objcopy tool. The bootloader actually finds the file and loads it correctly into RAM (I have opened the .bin files created with mb-objcopy with a hex editor, and the contents match those of the RAM at address 0x88000000). The problem I am having is that once this is done, I don't really know how to jump to the first instruction at 0x88000000. Should I reset all 32 registers and set PC to 0x88000000, and if so, how do I do that? I have already tried with the 'goto' instruction and any other that I could think of. I have found some bootloaders, intended mainly for loading Linux images, that do the following:

typedef void (*void_fn)(char *);
char *cmdline = "console=ttyS0";
/* ... load program to memory ... */
void_fn kernel_start = (void_fn)XPAR_MCB_DDR3_MPMC_BASEADDR;
(*kernel_start)(cmdline);
cleanup_platform();

This does not work for me, and I assume it is because it is meant to be used with Linux images.

What should I do? When transforming the .elf to .bin I got some 'insufficient space in device' errors, which I solved by using a rather elaborate way of getting the IMAGE.BIN image; those commands were:

./mb-objcopy -O binary -j .vectors.reset -j .vectors.sw_exception -j .vectors.interrupt -j .vectors.hw_exception IMAGE.elf IMAGE.BIN
./mb-objcopy -O binary -R .vectors.reset -R .vectors.sw_exception -R .vectors.interrupt -R .vectors.hw_exception IMAGE.elf IMAGE2.BIN
cat IMAGE.BIN IMAGE2.BIN

Please help me. It is very frustrating to have managed to read FAT32 files without any upper-level libraries and be stuck right at the end.

The SQL Server database that I ported to the SQL Server Compact 4 database uses 53 megabytes of data and .5 MB of indexes. The SQL Server Compact database that got created bloated to 265 megabytes, which is 5 times bigger!

I have tried to compact it and shrink it, but it has little effect. Does it help to make it read-only (assuming you can do that)? FYI, the handy/cool tool I'm using is the SQL CE 4.0 Toolbox, developed by Twitter: @ErikEJ. John Marsing http://MyHebrewBible.com/.

I just saw the Add Entity Data Model to Current Project. I tried it out and it works cool! I tried Add Code Generation Item from the edmx, but they don't have the ADO.Net DbContext Generator T4 template.

Do you know if I can get this from NuGet? I will search it out.

I will also try your suggestions about creating a library object. Thanks a lot. FYI, I added a couple more tables to my database and went through the process of rebuilding my Compact 4 database, but this time with your Add-In.

I'm still getting the bloat (it's over 300 meg now). I looked for the maintenance compact/shrink menu like the one you had with the standalone version, but couldn't find it. Oh well, having fun anyway. John Marsing http://MyHebrewBible.com/.

Hi, I am Peterson. I am using the Spartan-3E Starter Kit, which has a Spartan-3E FPGA with 500k gates, one XCF04S platform flash, one SPI flash (M25P16) and other peripherals. I want to store my MicroBlaze+bootloader (BRAM-based) bitstream inside the XCF04S platform flash, and upon bootup I want my bootloader to load a software application from SPI flash to DDR RAM and start executing.

I did the first step, i.e. MicroBlaze + BRAM-initialized bootloader stored inside the XCF04S platform flash in the form of an MCS file, and the bootloader started up correctly by displaying a greeting message. However I am confused about how to put my 'user.elf' file inside the SPI flash. I read a couple of forum messages describing the typical steps, which are:

1) convert the 'user.elf' file to binary using the mb-objcopy command - that I did; 2) then merge this binary file with an existing MCS file. However I don't have any MCS to merge with at this point, because the MicroBlaze + BRAM-based bootloader is stored inside the XCF04S. I would appreciate it if someone could help me out in this regard. (In fact I want to segregate my hardware+bootloader from the user software, this being a requirement of the application.) Plus, I did one more thing: I sent this binary file to my BRAM-initialized MicroBlaze application through the UART, which received the incoming bytes of the DDR-based software application, wrote them to the start of DDR RAM, and upon completion jumped to the start of DDR RAM to start execution using a function pointer (a typical method mentioned in several application notes). The same technique was working and correct for most DSP processors I had been working on in the past, but I am not getting how to do it here on MicroBlaze, coz it's not working. I am using the 'using SPI flash' demo which is available from Xilinx. I studied all the documentation in detail, then I updated its bootloader to copy the user application from the start of SPI flash for my scenario (instead of sector 6, as in that particular demo). I studied some other application notes as well, targeting Intel StrataFlash, and storing data, software and bitstream inside platform flash etc., but couldn't find a solution to my problem.
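One hedged route when there is no existing MCS to merge into: a Xilinx .mcs file is Intel-HEX formatted, so srec_cat from the SRecord package (assumed installed; file names from the post) can wrap the raw binary into a standalone MCS for the SPI flash:

srec_cat user.b -binary -offset 0 -o user.mcs -intel

The -offset value is the flash address the bootloader copies from; 0 matches reading from the start of the SPI flash as described above.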

If someone could help me, I would be so grateful! Thanks in advance. Regards, David Peterson.

We recently upgraded from SANscreen 5 to 6, and in version 6 I believe the IBM XIV reporting has been 'fixed'. When an IBM XIV reports 1 GB, it is decimal (i.e.

1000 MB per GB, 1000 KB per MB, etc). In version 5 of SANscreen it would report it as reported by the XIV. In version 6 it appears to (correctly) change this to binary (i.e. 1024 MB per GB). This is great, but it has caused a huge dip in my provisioned capacity report. Hosts that SANscreen used to think had 1098 GB now correctly show as 1024 GB. Is there a way for me to fix the old data in my reporting?
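The dip is consistent with a pure unit change rather than lost capacity: 1024 binary GB re-expressed with decimal prefixes is roughly the old figure (bc assumed):

echo 'scale=1; 1024 * 1024^3 / 1000^3' | bc   # prints 1099.5, close to the 1098 GB that v5 showed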

If I drop the DB and build from history, will that fix it, or will it just re-acquire the bad data from SANscreen for pre-v6?

Hello, I'm trying to get some stats from a context created on a c6500 ACE20-MOD-K9.

Add TCP-MIB metrics to system plugin
------------------------------------
Key: HHQ-1224
URL:
Project: Hyperic HQ
Type: New Feature
Components: Plugins
Reporter: Doug MacEachern
Assigned to: Doug MacEachern
Fix For: 3.2.0

Expose metrics added by SIGAR-63 to platforms supported in the system plugin. -- This message is automatically generated by JIRA. If you think it was sent incorrectly contact one of the administrators. For more information on JIRA, see: http://www.atlassian.com/software/jira.

[ ] Kashyap Parikh closed HHQ-1224: Looks good.

Hi Shanti, did you try Jdweng's suggestion?

How about the result? Here is one more possibility: the user identity is not authorized.

Please try the same code with another user account, or you can try to run as administrator. Here is another similar thread: I hope this will be helpful. Best regards, Mike Feng, MSDN Community Support. Feedback to us. Develop and promote your apps in Windows Store. Please remember to mark the replies as answers if they help and unmark them if they provide no help.

I have a project that uses MicroBlaze as a sub-module. SPI FLASH (M25P16) is used to store the FPGA bit file and the MicroBlaze elf file; I have some problems with the SPI bootloader. I do the following steps:

STEP 1: In ISE, implement the design and generate the programming file; now I have a bit file (for convenience, named abc.bit). I think abc.bit can only configure the FPGA, but doesn't include the MicroBlaze software code. STEP 2: In EDK, export hardware to SDK.

STEP 3: In SDK, I write a software project, myapp; this is the main software, running in DDR, and it works fine in the SDK debugger. After testing, I translate myapp.elf to myapp.srec with this cmd: mb-objcopy -O srec myapp.elf myapp.srec. STEP 4: In SDK, I write another software project, bootloader, running in BRAM on boot, calling the xilisf lib to load the srec-format elf from FLASH to DDR and execute it. STEP 5: In EDK, I add an elf-only software project; the elf file points to bootloader.elf, and this project is marked to initialize BRAM. I do this because ISE provides a tcl to init BRAM, so I can avoid using the command line to run data2mem or bitinit.

STEP 6: In ISE, I run 'Update Bitstream with Processor Data'; now I have abc_download.bit, which I think has the bootloader in it. STEP 7: In iMPACT, I combine abc_download.bit (which includes the bootloader in BRAM) and myapp.srec into a single MCS file, and download it to SPI FLASH.

But after powering on the FPGA, it didn't work. STEP 8: Back in SDK, I launch the bootloader in the debugger, step by step. I can see myapp.srec read from SPI FLASH correctly; myapp runs OK after the bootloader; I can even communicate with myapp through the UART. STEP 9: In ISE, another project (no embedded CPU) has validated configuring the FPGA from SPI FLASH. The bootloader is right, myapp is right, SPI FLASH configuring the FPGA is right - what's wrong? Did I miss something?

When you say, 'after power on FPGA, it didn't work,' can you be more specific? Did the FPGA configure at all?

If the FPGA configured, did you have any indication that the bootloader was running, like UART output? If the bootloader was running, were any S-records processed?

What was the failure mode? I created a (nearly) identical design, with the difference being that I did not use ProjNav with MicroBlaze sub-module. I worked directly from EDK/SDK. I had a problem where the FPGA configured, bootloader ran, and a few S-records were processed after which it hung. Turns out that I hadn't assigned ALL sections in my main app to run from DDR. As the bootloader ran along, it eventually trashed itself by copying pieces of the main app into BRAM.

Please provide more detail on when and how the failure occurs. What is the difference between SNMP v2 and SNMP v3? Main difference between SNMP v2 and SNMP v3 are the enhancements to the security and remote configuration model. SNMP v3 adds cryptographic security to SNMP v2.

SNMP v3 replaces the simple password sharing (as clear text) in SNMP v2 with much more securely encoded security parameters. Due to the introduction of new conventions for text, concepts and new terminology, SNMP v3 looks different from SNMP v2 (even though there aren't many changes). Thanks - Afroz [Do rate the useful post] ****Ratings Encourage Contributors****

So I know all about tmpDirectory= and I know about useNamedFile=false, but in no combination do these two options let VMware on a Linux host OS efficiently use 'hugepages'. There needs to be a way to put the ram0 files, and _just_ the ram0 files, into a particular directory.

Background: The Linux kernel allows allocation of 'huge' memory pages, use of which greatly improves memory access speeds for very large applications. The system administrator makes this available by allocating some memory to the huge page system and then mounting a hugetlbfs pseudo filesystem. Applications then create and mmap() files in this file system to allocate huge pages. This is a win because these pages are typically multiple megabytes each in size instead of just 4k each. This greatly reduces the virtual memory costs associated with loading/reloading page descriptors. (For technical details look up TLB, a.k.a. Translation Lookaside Buffers.) Using this method, some applications that use large amounts of memory can be accelerated substantially. The win for huge pages is, well, huge; for example the qemu-kvm open-source virtualization system will gain nearly 10% execution speed running a Windows guest in a Linux host when huge pages are used.
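The administrator-side setup described above, as a hedged sketch (run as root; the page count and mount point are illustrative, assuming 2 MB pages):

echo 512 > /proc/sys/vm/nr_hugepages    # reserve 512 huge pages, about 1 GB
mkdir -p /mnt/hugepages
mount -t hugetlbfs none /mnt/hugepages  # applications then mmap() files created here

This mount point is exactly what a ramPath=-style option would need to point into.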

VMware Workstation is already compatible at a system-call level with the huge page allocation system. That is, the way the products mmap() the ram0 (etc.) files is exactly what is needed. But since this always happens in 'tmpDirectory', or always happens in /tmp if you use the useNamedFile option, either the files end up in the wrong place or are accompanied by a lot of log files and other noise that don't deserve multi-megabyte memory allocations. I am hoping there is a secret ramPath= (or similar) setting that can redirect _just_ the ram file mappings to a particular location. (Directories in the huge page file system are okay, so it is okay if the option works like tmpDirectory in creating a subdirectory, as long as the stupid log files and other noise don't go there too.) Is there a secret option of this type, or someone to nag to make this happen? In the alternative, it would be magically special if VMware had a 'use huge pages' check box and would then just look for a mounted and writable hugetlbfs file system if that box were checked. I did some more research, and if the VMware products would use the MAP_HUGETLB flag for all their memory mappings of the ram backing file, this would magically improve performance system-wide regardless of the location of the file, remove any remaining issues, and be compatible with, or properly ignored by, every currently in-use Linux kernel. The 2.6.38 kernel with Transparent Hugepages set to 'always' will scavenge up these mappings into hugepages, but it isn't quite as efficient at optimal mapping as would be the case with explicit use of the flag, since the leading and trailing pages, if not on the hugepage boundaries (which would happen automagically with the MAP_HUGETLB flag set), lead to either rounding or over-allocation or under-use of the feature [don't know which].

I haven't dug into the module source to see if this can be coerced there, as the real-world project I am on barely gives me time to notice and gripe about this issue. 8-) NOTE: with Transparent Hugepages activated, some things finish faster in a VM than they do in native Windows. In particular, I have been using the latest 'MyDefrag' just to exercise the disk-and-memory 'aggregate feel' timings, since it does memory-intensive, disk-intensive system access; between the host OS caching, the condensation of some of the disk flushes, and the better memory behavior, well, it's kinda stunningly fast.

Up to a point, adding RAM (random access memory) will normally cause your computer to feel faster on certain types of operations. RAM is important because of an operating-system component called the virtual memory manager (VMM). When you run a program such as a word processor or an Internet browser, the microprocessor in your computer pulls the executable file off the hard disk and loads it into RAM. In the case of a big program like Microsoft Word or Excel, the EXE consumes about 5 megabytes.

The microprocessor also pulls in a number of shared DLLs (dynamic link libraries) -- shared pieces of code used by multiple applications. The DLLs might total 20 or 30 megabytes. Then the microprocessor loads in the data files you want to look at, which might total several megabytes if you are looking at several documents or browsing a page with a lot of graphics. So a normal application needs between 10 and 30 megabytes of RAM space to run.

On my machine, at any given time I might have the following applications running:
* A word processor
* A spreadsheet
* A DOS prompt
* An e-mail program
* A drawing program
* Three or four browser windows
* A fax program
* A Telnet session
Besides all of those applications, the operating system itself is taking up a good bit of space. Those programs together might need 100 to 150 megabytes of RAM, but my computer only has 64 megabytes of RAM installed. The extra space is created by the virtual memory manager.

The VMM looks at RAM and finds sections of RAM that are not currently needed. It puts these sections of RAM in a place called the swap file on the hard disk. For example, even though I have my e-mail program open, I haven't looked at e-mail in the last 45 minutes. So the VMM moves all of the bytes making up the e-mail program's EXE, DLLs and data out to the hard disk.

That is called swapping out the program. The next time I click on the e-mail program, the VMM will swap in all of its bytes from the hard disk, and probably swap something else out in the process. Because the hard disk is slow relative to RAM, the act of swapping things in and out causes a noticeable delay. If you have a very small amount of RAM (say, 16 megabytes), then the VMM is always swapping things in and out to get anything done.

In that case, your computer feels like it is crawling. As you add more RAM, you get to a point where you only notice the swapping when you load a new program or change windows. If you were to put 256 megabytes of RAM in your computer, the VMM would have plenty of room and you would never see it swapping anything. That is as fast as things get.

If you then added more RAM, it would have no effect. Some applications (things like Photoshop, many compilers, most film editing and animation packages) need tons of RAM to do their job. If you run them on a machine with too little RAM, they swap constantly and run very slowly. You can get a huge speed boost by adding enough RAM to eliminate the swapping.

Programs like these may run 10 to 50 times faster once they have enough RAM!

Hi, this may be an old and simple question, but I wasn't able to find a proper answer although I was surfing the net for days. I'm new to Sabre Lite.

I installed LTIB, checked out the i.MX6 kernel 4.1.0 source, and compiled it as pointed out in the following post. I believe it went smoothly, and I got the uImage created under the following path: ~/linux-imx6/arch/arm/boot/uImage. The following are the last few lines of the console output of the compilation.

Download the Linux documentation package from the Freescale web site here: In the Linux User's Guide there are instructions on how to create an SD card that can be booted from. However, the Freescale documents refer to SABRE SDB boards; SABRE Lite is different, as i.MX 6 boots from SPI flash first, I believe. Basically, for SABRE Lite you can just have a FAT32 partition on the SD card containing uImage and an ext3 partition with the file system you'd like to use. Then modify the U-Boot script accordingly.

For ease of use, you might look at Yocto: Freescale/fsl-community-bsp-platform GitHub. Supported boards are listed here: Freescale/fsl-community-bsp-base GitHub. There are instructions here on i.MX Community as well on how to get started with Yocto.

Hi, I am developing an application which handles thousands of data packets received from a device. The device sends at a very high data rate, say 500 packets/second. My application should process all the data and store it on disk for further processing. There shouldn't be any data loss. Each packet contains 3-4 numeric values, and the application can run for more than an hour. So over an hour the application receives 60 * 60 * 500 = 1,800,000 data packets, which is 5,400,000 floating-point values.

I tried XML and binary serialization to store these values on disk, but found that the stored file size is huge (in megabytes), and data retrieval is rather time-consuming. Then I used a SQLite database for storage and found that performance is very good, but the database still becomes somewhat large. Can anyone suggest the best/most efficient way to store these numeric data? Saving and retrieval should be fast, and the stored data size should be minimal.
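(As an aside, before giving up on SQLite: its write speed usually hinges on batching inserts inside a transaction with a prepared statement. A minimal C sketch, where the samples table name and schema are made up for illustration:

/* Sketch: batch one second's worth of packets per transaction. */
#include <sqlite3.h>

int store_batch(sqlite3 *db, const double *vals, int n_triples)
{
    sqlite3_stmt *st;
    sqlite3_exec(db, "BEGIN", 0, 0, 0);
    sqlite3_prepare_v2(db, "INSERT INTO samples(v1,v2,v3) VALUES(?,?,?)", -1, &st, 0);
    for (int i = 0; i < n_triples; i++) {
        sqlite3_bind_double(st, 1, vals[3 * i]);
        sqlite3_bind_double(st, 2, vals[3 * i + 1]);
        sqlite3_bind_double(st, 3, vals[3 * i + 2]);
        sqlite3_step(st);
        sqlite3_reset(st);
    }
    sqlite3_finalize(st);
    return sqlite3_exec(db, "COMMIT", 0, 0, 0);
}

Without the BEGIN/COMMIT wrapper, SQLite syncs to disk on every insert, which is typically what makes it feel slow at 500 rows/second.)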

Hi, there are some problems in the menu. When I press the shortcut keys for the first time, the operation is performed.

If you want speed, then you want to do as little as possible with the data as it comes in. If you want small, you have to spend processor time compressing the data. These two aims are not compatible. 5.4 million of anything is going to take some room to store, and floating-point values are, what, 8 bytes each these days (IEEE binary64)? So that's 43 MB and change (5,400,000 * 8 bytes) for just the raw data.

Now, you can feed all of that into a flat file over the course of an hour no trouble at all - let's face it, MS Word can do it in a few handfuls of seconds - but getting it back is going to take a while. Do you need to retrieve the data in real time?

I would suggest capturing the data quickly, straight into a file, and then re-reading it (at a more leisurely pace) using a second program. In theory, you could even have both running at once, so long as the reader/processor program is slower than the writer/capturer (or, at least, is polite enough to wait for new data to be written).
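As a concrete (illustrative) version of that capture step: writing fixed-size records of three single-precision floats straight to a flat file costs 12 bytes per packet, i.e. 500 * 3600 * 12 = roughly 21.6 MB per hour, about half the size of storing doubles. The file name and record layout here are assumptions:

/* Sketch: raw capture of 3-float records to a flat binary file. */
#include <stdio.h>

struct sample { float v[3]; };

int main(void)
{
    FILE *out = fopen("capture.bin", "wb");
    if (!out) { perror("fopen"); return 1; }

    struct sample s = { { 1.0f, 2.0f, 3.0f } };  /* stand-in for one device packet */
    for (long i = 0; i < 500L * 3600; i++) {
        if (fwrite(&s, sizeof s, 1, out) != 1) { perror("fwrite"); break; }
    }

    fclose(out);
    return 0;
}

The reader program can then fread() the same fixed-size records back at its own pace.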

Of course, you could also do some clever buffering between two threads in a single program, if you feel up to it. Regards, Phill W.

The subject line says it all. I have discovered that LR reduces the size of files transferred to Photoshop for further editing when compared to the same RAW file processed in ACR in Photoshop. For example, a PSD file created by LR from a 36 MB RAW file will typically result in a 100 MB PSD file.

The same RAW file processed similarly in Photoshop will result in a file around 300 MB. This makes a huge difference when the files are converted to JPEGs for printing.

The smaller LR PSD files result in JPEGs typically under 500 KB, where the Photoshop JPEGs are typically around 1.5 MB. DSalk wrote: This makes a huge difference when the files are converted to JPEGs for printing. The smaller LR PSD files result in JPEGs typically under 500 KB where the Photoshop JPEGs are typically around 1.5 MB. You've only identified why the adjusted PSD 'file size' is bigger, not the JPEGs. As we've pointed out, this change in PSD file size has no impact on the image size or its quality. In order to rotate or change the image perspective, the background layer must be placed into layer mode, which increases the PSD file size by ~2x when saved.

The image resolution and document size remain unchanged! After performing image rotation and vertical perspective correction on the PSD file, the image will have to be cropped to restore straight sides. This reduces the document size (pixel resolution) and the JPEG file size compared to the unadjusted PSD file. This assumes both conversions use the same PS JPEG Quality setting. The smaller JPEG file (i.e. 500 KB?) is actually being created from the larger PSD file!

The only reason its quality may seem lower is that the image has been cropped and now has a smaller document size (pixel resolution). This has nothing to do with the original PSD file size. Compare the 500 KB and 1.5 MB JPEG images in Bridge using the thumbnail Preferences settings I suggested. The 500 KB image file (from the edited PSD) should have smaller 'Dimensions (in inches)' due to cropping.

Hello, I want to dynamically load binaries into the BRAM of a MicroBlaze and execute them.

For creating them I found the following solution in XPS: 1. Compile an ELF file with the option -nostartfiles, setting up the start address from which the program has to be executed later on. 2. Convert the ELF file to a BIN file with mb-objcopy. I am looking for a way to do this from SDK. The thing is, there is no field in SDK (at least none I found) to define a start address. If I define it manually, the file is filled with zeros up to this start address in the later-converted BIN file. Therefore my questions: can I do it in SDK?

Is there a possibility to create binaries for BRAM without binding them to a fixed start address? Thanks for your suggestions!

Hello, we are using the Ethernet echo-back application (XAPP1026 sock version); we built the project, generated the bit file, and successfully tested it on an ML505 board.

Our software application couldn't be loaded into the BRAM, so we had to use SDRAM, with the stack and heap sizes changed to 0x4000, and it executed successfully. The next step was to download the application into the serial flash so that it could be loaded into the FPGA on power-on reset or soft reset, which required a bootloader. For that we took the following steps. Downloaded our ELF file into the linear flash memory (we checked 'auto-convert to SREC format'). Created a bootloader application by checking the 'Create bootloader' option in the 'Program Flash' dialog and marked it to initialize BRAM. Updated the bitstream.

Created the MCS using the download.bit file. Downloaded the MCS to serial flash. After a power reset the application ran perfectly when the ML505 was directly connected to the PC, and characters were echoed back on the local host through telnet. But when we connected the board through a switch, the telnet command failed to connect the PC to the board. We did another experiment.

To debug, we added a few xil_printf statements to locate the problem. We found that the application binds the socket and opens it in listen mode, but no print statement after the accept call is executed, indicating that either it's not receiving any requests or it's not accepting any requests. Yet the Rx LED blinks on and off. This debugging shows that the code executes up to the 'lwip_accept' function of echo.c, which connects to the host. While the MCS was downloaded to the PROM, we also downloaded the ELF file using XMD, and that worked even when the switch was involved.

This indicates that maybe the complete software code is not loaded from flash to SDRAM. We did another experiment to address the above-mentioned problem. To extract the read-only sections of the code into the SREC file, the following command was used.

$ mb-objcopy -O srec -j .text -j .init -j .fini -j .rodata -j .sbss2 -j .vectors.reset -j .vectors.sw_exception -j .vectors.interrupt -j .vectors.hw_exception executable.elf flash.srec

Using this mb-objcopy command we generated the SREC file, downloaded it to flash, and repeated the same steps as mentioned in the previous bullets. On startup, only 2 print messages are received on the serial port: EDK Bootloader.

Program starting at address: 0x00000000. This experiment was repeated by downloading a BIN file in place of the SREC file created through the mb-objcopy command. The result was that the 'Program starting at address: 0x00000000' message was received repeatedly. I wonder why the code does not execute any further; any suggestions?

Or kindly point out if any step is missing or wrong. Tools used are: EDK 10.1 (SP3), Xilinx 10.1 (SP3).

Towards the end of the last billing cycle, I knew I was coming close to my 1 GB data limit. So on the night the billing cycle ended, with 3 hours until the next cycle began and 24 megabytes to spare on my data plan, I disabled all mobile data on my phone, put it into airplane mode, and went to sleep. I opened my .PDF bill a couple of days later to find that I was billed $20 for extra data.

It made me sick that such a huge corporation has such fraudulent billing practices. According to the bill, in that 3-hour period I had mysteriously used 60 megabytes of data. On my phone's data log for the ENTIRE DAY, I am at 1 megabyte. How many people has ATT fraudulently charged $20 extra or more for overages that are close enough to be disputable? Is this a common practice? Luckily, I took screenshots of my ATT data usage directly from their website that night to protect myself, because I had a hunch that they would try to charge me for overages, and my hunch was correct.

THIS IS A BREACH OF CONTRACT. THIS IS FRAUDULENT. I wonder if an investigation by the FCC or FTC or the Bureau of Consumer Protection would turn up a very slimy pattern of billing.

Also, when I signed up, I was lied to by the sales associate. I was told that my USG State Department discount of 15% would apply to the entire bill. Instead, it applied only to a small portion of the bill, giving an effective discount of 3%. More slimy practices. I am pretty disgusted. Don't you find it strangely convenient that upgrading to the 10 GB data plan was only $10 more, yet they charge you $15/GB if you go over?

What better way to get more money out of someone than to blast them with data charges, make them pay through overage charges, and then offer a simple solution once they call in and complain (i.e. the $10/month upgrade)?

Sounds like a win/win situation for ATT if you ask me. Either you pay the overage charges or you pay to upgrade. Either way, they're getting paid.